Error running JIT model with the Metal backend, any possible solutions?

Open to ideas on whether this is an actual bug or something I can solve on my end.

RuntimeError Traceback (most recent call last)
in ()
1 kp_driving = kp_detector(image_tensor)
2 traced_generator = torch.jit.trace(generator, (image_tensor, kp_driving, kp_driving), strict=False)
----> 3 optimized_traced_generator = optimize_for_mobile(traced_generator, backend='metal')
4 optimized_traced_generator._save_for_lite_interpreter("/content/generator_metal.pt")

/usr/local/lib/python3.7/dist-packages/torch/utils/mobile_optimizer.py in optimize_for_mobile(script_module, optimization_blocklist, preserved_methods, backend)
67 optimized_cpp_module = torch._C._jit_pass_vulkan_optimize_for_mobile(script_module._c, preserved_methods_str)
68 elif backend == 'metal':
---> 69 optimized_cpp_module = torch._C._jit_pass_metal_optimize_for_mobile(script_module._c, preserved_methods_str)
70 else:
71 raise TypeError("Unknown backend, must be one of 'CPU', 'Vulkan' or 'Metal'")

RuntimeError: 0INTERNAL ASSERT FAILED at "…/torch/csrc/jit/ir/alias_analysis.cpp":584, please report a bug to PyTorch. We don't have an op for metal_prepack::conv2d_prepack but it isn't a special case. Argument types: Tensor, Tensor, int[], int[], int[], int, NoneType, NoneType,
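For reference, here is a minimal standalone sketch of the same export flow. Since `generator` and `kp_detector` aren't shown in this thread, a tiny stand-in model is used, and `backend='CPU'` is passed so the snippet runs without a Metal-enabled build (swapping in `backend='metal'` is what triggers the assert above):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Tiny stand-in model (hypothetical; the real generator/kp_detector are not shown here)
class TinyGenerator(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyGenerator().eval()
example = torch.randn(1, 3, 64, 64)

# Trace the model, then optimize it for the mobile runtime.
traced = torch.jit.trace(model, example)

# backend='CPU' works on any build; backend='metal' additionally requires a
# PyTorch build compiled with Metal support, and is where the assert fires.
optimized = optimize_for_mobile(traced, backend='CPU')
optimized._save_for_lite_interpreter("generator_cpu.ptl")

# Sanity check: the optimized module matches the traced one numerically.
assert torch.allclose(traced(example), optimized(example), atol=1e-4)
```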

Based on the error message it sounds like an unexpected bug, so could you create a GitHub issue so that we can track and fix it, please?

Is it possible to export a Metal model on Linux? I'm experiencing a similar issue.

Yes, I think exporting the model should work on a different platform, but the execution would then need the Metal backend. Are you seeing another issue with it, since you've created the topic?
Did you create an issue on GitHub, so that the devs could take a look at it?

A colleague opened one up.

Let me know your thoughts.

Matt


Just looking at this more closely.

Is there a case-sensitivity issue, or is there just no unit test covering the Metal conv2d prepack changes? It seems to be supported, just not working; maybe a previous version works.

I see the support implementation in PRs like [iOS GPU][Stub] Move conv2d_prepack impl from MetalPrepackOpRegister.… · pytorch/pytorch@8ae8fb7 · GitHub
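One way to check whether the `metal_prepack` ops are actually registered in a given build is to list all registered operator schemas. Note this uses internal, undocumented `torch._C` APIs, so treat it as a diagnostic sketch that may break between releases:

```python
import torch

# Enumerate all operator schemas registered in this build and filter for the
# metal_prepack namespace (internal API; names may change between versions).
schemas = torch._C._jit_get_all_schemas()
metal_ops = sorted({s.name for s in schemas if s.name.startswith("metal_prepack::")})
print(metal_ops)  # empty list on a build compiled without Metal support
```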

The error points here:
/usr/local/lib/python3.7/dist-packages/torch/utils/mobile_optimizer.py in optimize_for_mobile(script_module, optimization_blocklist, preserved_methods, backend)
67 optimized_cpp_module = torch._C._jit_pass_vulkan_optimize_for_mobile(script_module._c, preserved_methods_str)
68 elif backend == 'metal':
---> 69 optimized_cpp_module = torch._C._jit_pass_metal_optimize_for_mobile(script_module._c, preserved_methods_str)
70 else:
71 raise TypeError("Unknown backend, must be one of 'CPU', 'Vulkan' or 'Metal'")

Seems like a small compilation issue. I tried switching the backend label on export to 'Metal'; no effect.