[iOS Metal] Could not run 'metal_prepack::linear_run' with arguments from the 'CPU' backend

Hi, I’m very new to using PyTorch on iOS, so my question may sound silly. I keep getting this Could not run ‘metal_prepack::linear_run’ with arguments from the ‘CPU’ backend error. While the error message is very clear, I have already converted my input tensor with at::Tensor(input).metal() before passing it into the model’s forward method, so I am not sure at what step I went wrong. The optimised TorchScript graph (shown below) seems to take my input directly (which is correct), so I am lost.

%prepack_folding_forward._jit_pass_packed_weight_0 : __torch__.torch.classes.metal.LinearOpContext = prim::GetAttr[name="prepack_folding_forward._jit_pass_packed_weight_0"](%self.1)
  %124 : Tensor = metal_prepack::linear_run(%encoder_outputs.1, %prepack_folding_forward._jit_pass_packed_weight_0)

Furthermore, the error message from the PyTorch library only gives:
’metal_prepack::linear_run’ is only available for these backends: [Named, VE, QuantizedCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID].

This doesn’t really help the debugging process :frowning:

Anyway, I hope someone can help me diagnose my problem.

Thanks!

Xun.

Xun, did you follow this tutorial ((Prototype) Use iOS GPU in PyTorch — PyTorch Tutorials 1.11.0+cu102 documentation) to build PyTorch with Metal support (or download the nightly from CocoaPods) and to optimize the model for the Metal backend? The error message suggests you are trying to run the model on the CPU.
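For reference, the export step looks roughly like this (a minimal sketch with a toy torch.nn.Linear model, not your actual one). Note that the default backend='CPU' produces a model whose ops only run on the CPU; the Metal variant is shown in comments because it requires a PyTorch build with Metal enabled:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Toy model standing in for the real one
model = torch.nn.Linear(4, 2).eval()
scripted = torch.jit.script(model)

# Default optimization: ops stay on the CPU backend, so calling the
# resulting model with a Metal tensor (or vice versa) raises the
# "Could not run ... with arguments from the ... backend" error.
optimized = optimize_for_mobile(scripted)

# For the iOS GPU path (requires a Metal-enabled PyTorch build), the
# tutorial uses the 'metal' backend instead, which inserts the
# metal_prepack::* ops seen in your graph:
#   optimized = optimize_for_mobile(scripted, backend='metal')
#   optimized._save_for_lite_interpreter("model_metal.pt")
```

In other words, both sides have to match: a Metal-optimized model needs Metal inputs on a Metal-enabled build, and a CPU-optimized model needs plain CPU tensors.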