Missing quantized dtype like torch.quint8 when building 1.10 with operator list

We have been able to build the PyTorch 1.9 lite static libs with an OP_LIST defined, and to run them with NNAPI models as well (see the discussion in About build_android.sh, LITE and NNAPI).
We are now in the process of switching from Torch 1.9 to 1.10. With our patch (see the previous link) we can build and run lite with NNAPI as long as we do not use the OP_LIST. But if we build PyTorch with our OP_LIST (which worked fine for 1.9, and which we regenerated for 1.10 with the same process, using torch.jit.export_opnames(model), as sketched below), we get the following error:

```
E/libc++abi: terminating with uncaught exception of type c10::Error: Creation of quantized tensor requires quantized dtype like torch.quint8 ()
    Exception raised from empty_affine_quantized_other_backends_stub at ../aten/src/ATen/native/quantized/TensorFactories.cpp:79 (most recent call first):
    (no backtrace available)
```

Any hints on how to fix this? Maybe a missing operator in our OP_LIST YAML file?
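For reference, this is roughly how we produce the OP_LIST YAML (a minimal sketch; the model path and output file name are placeholders for our actual files):

```python
import torch
import yaml

# Placeholder path: the scripted model we ship to mobile.
model = torch.jit.load("model_scripted.pt")

# Static analysis of the TorchScript code: collect the root operator names.
ops = torch.jit.export_opnames(model)

# Dump them as the YAML file passed to the build via OP_LIST.
with open("model_ops.yaml", "w") as f:
    yaml.dump(ops, f)
```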

You may need to use the tracing-based selective build in 1.10: first trace the model, then update the OP_LIST with the result. cc @cccclai

How does tracing work for creating/integrating the OP_LIST?

Maybe try following this? Conclusion — PyTorch Tutorials 1.10.0+cu102 documentation
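If it helps, the model-preparation side of that workflow looks roughly like the sketch below: bundle example inputs into the scripted module and save it in the lite-interpreter format, so the tracer binary (built from the PyTorch repo as the tutorial describes) can execute it and record the operators actually hit. The toy module and file name here are just placeholders.

```python
import torch
from torch.utils import bundled_inputs

# Placeholder model; in practice this would be the quantized / NNAPI model.
class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

scripted = torch.jit.script(TinyModel())

# Attach a deflatable example input to the module (in place), so the tracer
# can execute forward() without any external input files.
bundled_inputs.augment_model_with_bundled_inputs(
    scripted, [(bundled_inputs.bundle_randn(1, 3, 224, 224),)]
)

# Save in the lite-interpreter format that the tracer consumes.
scripted._save_for_lite_interpreter("model_with_bundled_inputs.ptl")
```

As far as I understand, the tracer then runs that .ptl, records every operator it actually dispatches to at runtime, and emits a selected-operator YAML that replaces the export_opnames one when building the static libs.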


I suspect there might be some issues with building the tracer in master. Let me follow up on it.

Thanks! Just to understand: in what way is the tracing-based selective build better than the static analysis performed by export_opnames? If the input coverage is extensive, is it expected to produce a more complete operator list?
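Once we have both lists, I suppose we could diff them to see what the static analysis misses, along these lines (a rough sketch; the file names are placeholders, and the tracer output would probably need flattening into a plain list of operator names first):

```python
import yaml

# Placeholder file names for the two operator lists.
with open("ops_static.yaml") as f:   # from torch.jit.export_opnames
    static_ops = set(yaml.safe_load(f))
with open("ops_traced.yaml") as f:   # from the tracing-based selective build
    traced_ops = set(yaml.safe_load(f))

# Operators only the tracer saw, e.g. kernels reached through runtime
# dispatch such as quantized backends.
print("traced-only:", sorted(traced_ops - static_ops))
print("static-only:", sorted(static_ops - traced_ops))
```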