/usr/local/lib/python3.7/dist-packages/torch/utils/mobile_optimizer.py in optimize_for_mobile(script_module, optimization_blocklist, preserved_methods, backend)
     67         optimized_cpp_module = torch._C._jit_pass_vulkan_optimize_for_mobile(script_module._c, preserved_methods_str)
     68     elif backend == 'metal':
---> 69         optimized_cpp_module = torch._C._jit_pass_metal_optimize_for_mobile(script_module._c, preserved_methods_str)
     70     else:
     71         raise TypeError("Unknown backend, must be one of 'CPU', 'Vulkan' or 'Metal'")

RuntimeError: 0INTERNAL ASSERT FAILED at "…/torch/csrc/jit/ir/alias_analysis.cpp":584, please report a bug to PyTorch. We don't have an op for metal_prepack::conv2d_prepack but it isn't a special case. Argument types: Tensor, Tensor, int[], int[], int[], int, NoneType, NoneType,
Yes, I think exporting the model should work on a different platform, but the execution would then need the Metal backend. Are you seeing another issue with it, since you've created the topic?
Did you create an issue on GitHub, so that the devs could take a look?
Is this a case-sensitivity issue, or is there just no unit test covering the Metal conv2d prepack changes? It seems like it's supported but not working; maybe a previous version works.
The error is the same INTERNAL ASSERT FAILED for metal_prepack::conv2d_prepack shown in the traceback above.
Seems like a small compilation issue. I tried switching the backend label on export to 'Metal', with no effect.
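For reference, here is a minimal sketch of the export path being discussed, assuming a PyTorch build with Metal support is installed. Note that in the traceback above the comparison is against the lowercase string 'metal', so the backend argument is presumably case-sensitive at that point; the model name and file path here are placeholders.

```python
def export_for_metal(model, path="model_metal.pt"):
    # Imports kept inside the function so the sketch can be defined
    # without a Metal-enabled PyTorch build present.
    import torch
    from torch.utils.mobile_optimizer import optimize_for_mobile

    model.eval()
    # Script the model (torch.jit.trace with an example input also works).
    scripted = torch.jit.script(model)
    # The traceback compares backend == 'metal' (lowercase), so pass it
    # exactly like this rather than 'Metal'.
    optimized = optimize_for_mobile(scripted, backend="metal")
    # Save in the lite-interpreter format used by PyTorch Mobile.
    optimized._save_for_lite_interpreter(path)
    return path
```

This is only a sketch of the call sequence that triggers the assert, not a workaround; the assert itself fires inside the metal optimization pass regardless of how the module is scripted.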