Torchscript with vulkan backend fails to interpret on Android Phone

I generated a .pt file following the PyTorch Android documentation here: PyTorch Vulkan Backend User Workflow — PyTorch Tutorials 1.13.0+cu117 documentation
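For context, the export step I followed looks roughly like the sketch below. The model here is a made-up stand-in, not my real network; the tutorial passes `backend='vulkan'` to `optimize_for_mobile` (which needs a Vulkan-enabled PyTorch build), while the sketch uses the default CPU backend so it runs on a stock build:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in model; the real one contains InstanceNorm2d layers as in the trace.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.InstanceNorm2d(8, track_running_stats=True),
    torch.nn.ReLU(),
).eval()

scripted = torch.jit.script(model)
# The Vulkan tutorial uses backend='vulkan' here; 'CPU' is shown so this
# sketch works without a USE_VULKAN build.
mobile_model = optimize_for_mobile(scripted, backend='CPU')
mobile_model._save_for_lite_interpreter("model.ptl")
```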

I can run the example from the tutorial without problems. However, when I try to run my own model, it fails with the following error:

E/libc++abi: terminating with uncaught exception of type std::runtime_error: The following operation failed in the TorchScript interpreter.
    Traceback of TorchScript, serialized code (most recent call last):
      File "code/__torch__/", line 232, in forward
          running_mean0 = _110.running_mean
          running_var0 = _110.running_var
          _59 = torch.instance_norm(input14, None, None, running_mean0, running_var0, False, 0.10000000000000001, 1.0000000000000001e-05, True)
                ~~~~~~~~~~~~~~~~~~~ <--- HERE
          input15 = _59
        map3 = torch.relu_(input15)
    Traceback of TorchScript, original code (most recent call last):
      File "/home/miniconda3/envs/pytorch-dev/lib/python3.8/site-packages/torch/nn/", line 2495, in forward
        if use_input_stats:
        return torch.instance_norm(
               ~~~~~~~~~~~~~~~~~~~ <--- HERE
            input, weight, bias, running_mean, running_var, use_input_stats, momentum, eps, torch.backends.cudnn.en

Does this mean that:

  1. instance_norm is not yet supported by the Vulkan backend, or
  2. the TorchScript file was not generated properly, or
  3. it is an Android issue?
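One way to rule out option 2 is to script the same instance-norm pattern that appears in the traceback and run it on CPU; if that works, the TorchScript file itself is fine and the failure is specific to the Vulkan path. A minimal sketch (the tiny module below is my own construction, chosen to mirror the `instance_norm` + `relu_` calls in the trace):

```python
import torch

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # track_running_stats=True reproduces the running_mean/running_var
        # arguments visible in the serialized traceback
        self.norm = torch.nn.InstanceNorm2d(4, track_running_stats=True)

    def forward(self, x):
        return torch.relu_(self.norm(x))

scripted = torch.jit.script(TinyNet().eval())
out = scripted(torch.randn(1, 4, 8, 8))
print(tuple(out.shape))  # (1, 4, 8, 8) on CPU, no interpreter error
```

On CPU this runs cleanly, which points at the Vulkan backend rather than the scripting step.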

If instance_norm is not supported, why don’t we have a CPU fallback mechanism like the one for Android NNAPI?

Please don’t tag specific users, as it could discourage others from posting a valid answer, and you might tag someone who is not familiar with this particular question (in this case I don’t know enough about Android to help you out).

Thanks for the heads up


It’s fixed! This issue is not present in the latest PyTorch master.