I have generated a .pt file following the PyTorch Android documentation given here: PyTorch Vulkan Backend User Workflow — PyTorch Tutorials 1.13.0+cu117 documentation
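For reference, my export roughly follows the tutorial's workflow, sketched below. The model and file names here are placeholders, not my real model; I only kept an `InstanceNorm2d` layer with `track_running_stats=True`, since that lowers to the `torch.instance_norm` call that fails later.

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Placeholder model (the real one is larger). The InstanceNorm2d with
# track_running_stats=True lowers to torch.instance_norm(..., use_input_stats=False)
# in eval mode, matching the call in the traceback.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.norm = nn.InstanceNorm2d(8, track_running_stats=True)

    def forward(self, x):
        return torch.relu(self.norm(self.conv(x)))

model = TinyNet().eval()
scripted = torch.jit.script(model)

# Rewrite the graph for the Vulkan GPU backend, as in the tutorial.
# This step may need a PyTorch build with USE_VULKAN=1; stock CPU
# wheels can raise here, hence the guard.
try:
    vulkan_model = optimize_for_mobile(scripted, backend="vulkan")
    vulkan_model.save("tinynet_vulkan.pt")
except Exception as e:
    print(f"Vulkan optimization unavailable in this build: {e}")
```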
I am able to run the example mobilenetv2.pt from the tutorial without issues. However, when I try to run the model I created, it fails with the following error:
```
E/libc++abi: terminating with uncaught exception of type std::runtime_error: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/___torch_mangle_1036.py", line 232, in forward
    running_mean0 = _110.running_mean
    running_var0 = _110.running_var
    _59 = torch.instance_norm(input14, None, None, running_mean0, running_var0, False, 0.10000000000000001, 1.0000000000000001e-05, True)
          ~~~~~~~~~~~~~~~~~~~ <--- HERE
    input15 = _59
    map3 = torch.relu_(input15)

Traceback of TorchScript, original code (most recent call last):
  File "/home/miniconda3/envs/pytorch-dev/lib/python3.8/site-packages/torch/nn/functional.py", line 2495, in forward
    if use_input_stats:
        _verify_spatial_size(input.size())
    return torch.instance_norm(
           ~~~~~~~~~~~~~~~~~~~ <--- HERE
        input, weight, bias, running_mean, running_var, use_input_stats, momentum, eps, torch.backends.cudnn.en
```
Does this mean that:
- instance_norm is not yet supported on the Vulkan backend, or
- the TorchScript file is not generated properly, or
- it is an Android issue?
If instance_norm is not supported, why don’t we have a CPU fallback mechanism like the one for Android NNAPI?
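To narrow this down, here is a small check I could run on a desktop PyTorch build compiled with `USE_VULKAN=1` to see whether `instance_norm` itself fails on the Vulkan device. This is only a sketch; it is guarded with `torch.is_vulkan_available()` because stock builds have no Vulkan support, and the tensor shapes are arbitrary.

```python
import torch

# Sketch: call torch.instance_norm directly on Vulkan tensors, with the
# same argument pattern as the failing TorchScript call (no affine
# weight/bias, use_input_stats=False). Needs a USE_VULKAN=1 build.
if torch.is_vulkan_available():
    x = torch.randn(1, 8, 16, 16).to("vulkan")
    mean = torch.zeros(8).to("vulkan")
    var = torch.ones(8).to("vulkan")
    try:
        y = torch.instance_norm(x, None, None, mean, var,
                                use_input_stats=False, momentum=0.1,
                                eps=1e-5, cudnn_enabled=False)
        print("instance_norm ran on Vulkan:", y.size())
    except RuntimeError as e:
        print("instance_norm failed on Vulkan:", e)
else:
    print("This build has no Vulkan support; skipping the check.")
```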