We were able to build the PyTorch 1.9 lite static libs with an OP_LIST defined, and to run them with NNAPI models as well (see the discussion at About build_android.sh, LITE and NNAPI).
Now we are in the process of switching from torch 1.9 to 1.10. With our patch (see the link above) we can build and run lite with NNAPI as long as we do not use an OP_LIST. But if we build PyTorch with our OP_LIST (which worked fine for 1.9, and which we regenerated for 1.10 with the same process, using torch.jit.export_opnames(model)), we get the following error:
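For reference, the op list was regenerated roughly as sketched below. TinyModel is a placeholder for our real module, and the one-entry-per-line YAML layout is what we pass to the build via SELECTED_OP_LIST; adjust both to your setup.

```python
# Sketch: regenerate the selected-ops YAML for a mobile custom build.
# TinyModel stands in for the real model; replace it with yours.
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# Script the model, then ask TorchScript which root ops it uses.
scripted = torch.jit.script(TinyModel())
op_names = torch.jit.export_opnames(scripted)

# Write one "- aten::..." entry per op, as expected by the build.
with open("op_list.yaml", "w") as f:
    for op in sorted(op_names):
        f.write(f"- {op}\n")
```

Note that export_opnames only reports ops reachable from the scripted graph, so ops introduced later (e.g. by backend preparation steps) may need to be added to the YAML by hand.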
E/libc++abi: terminating with uncaught exception of type c10::Error:
Creation of quantized tensor requires quantized dtype like torch.quint8 ()
Exception raised from empty_affine_quantized_other_backends_stub at ../aten/src/ATen/native/quantized/TensorFactories.cpp:79 (most recent call first):
(no backtrace available)
Any hints on how to fix this? Could an operator be missing from our OP_LIST YAML file?