Commit
Fixed issue of loading woq model for intel GPU (#1614)

Signed-off-by: Cheng Penghui <[email protected]>
PenghuiCheng authored Jun 21, 2024
1 parent 851d157 commit 31d3f26
Showing 1 changed file with 1 addition and 1 deletion.
@@ -180,7 +180,7 @@ def build_woq_model(model, quantization_config):
 
 def convert_model_to_public(model):
     # reorder weight and scales if they have been transposed
-    if model.device == "xpu":
+    if model.device == "xpu" or (isinstance(model.device, torch.device) and model.device.type == "xpu"):
         for name, module in model.named_modules():
             if isinstance(module, WeightOnlyQuantizedLinear):
                 if module.weight_transposed:
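
For context: in PyTorch, a torch.device instance never compares equal to a plain string, so the old check model.device == "xpu" was false whenever model.device was a device object rather than the literal string, and the transposed weights of a weight-only-quantized (WOQ) model loaded on an Intel GPU were never reordered. Below is a minimal sketch of the distinction; the helper name is illustrative, not from the repository, and it assumes a PyTorch build that recognizes the "xpu" device type (constructing a torch.device does not require the XPU backend to be usable).

import torch

def is_xpu_device(device) -> bool:
    # Mirrors the patched condition: accept both the literal string "xpu"
    # and a torch.device whose backend type is "xpu" (any index, e.g. "xpu:0").
    if isinstance(device, torch.device):
        return device.type == "xpu"
    return device == "xpu"

print(torch.device("xpu") == "xpu")           # False -- the old check's bug
print(is_xpu_device("xpu"))                   # True
print(is_xpu_device(torch.device("xpu:0")))   # True: index-qualified devices match too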
