Following ops cannot be found. Check fburl.com/missing_ops for the fix. {prim::is_quantized, }

pytorch: 1.9.1

android: 11

android gradle:

implementation 'org.pytorch:pytorch_android_lite:1.9.0'
implementation 'org.pytorch:pytorch_android_torchvision:1.9.0'

problem:

com.facebook.jni.CppException: Following ops cannot be found. Check fburl.com/missing_ops for the fix.{prim::is_quantized, } ()

Test code:

import torch

class AnnotatedConvBnReLUModel(torch.nn.Module):
    def __init__(self):
        super(AnnotatedConvBnReLUModel, self).__init__()
        self.conv = torch.nn.Conv2d(3, 5, 3, bias=False).to(dtype=torch.float)
        self.bn = torch.nn.BatchNorm2d(5).to(dtype=torch.float)
        self.hs = torch.nn.Hardswish()
        self.quant = torch.quantization.QuantStub()      # marks where the input is quantized
        self.dequant = torch.quantization.DeQuantStub()  # marks where the output is dequantized

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.bn(x)
        x = self.hs(x)
        x = self.dequant(x)

        return x
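For context, a minimal sketch of how a model like this is typically quantized and exported for the lite interpreter (eager-mode post-training quantization with the qnnpack backend; the calibration input and output file name are placeholders, not from the original report):

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = AnnotatedConvBnReLUModel().eval()

# Eager-mode post-training static quantization with the qnnpack backend
torch.backends.quantized.engine = "qnnpack"
model.qconfig = torch.quantization.get_default_qconfig("qnnpack")
prepared = torch.quantization.prepare(model)
prepared(torch.randn(1, 3, 224, 224))   # placeholder calibration pass
quantized = torch.quantization.convert(prepared)

# Script, optimize for mobile, and save in lite-interpreter format
scripted = torch.jit.script(quantized)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("conv_bn_relu.ptl")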

Does the Android lib “org.pytorch:pytorch_android_lite” not support torch.nn.Hardswish?

import torchvision

model = torchvision.models.quantization.mobilenet_v3_large(pretrained=False, quantize=True)

and the mobilenet_v3_large model has the same problem. Why?
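A minimal sketch of how this torchvision model would typically be exported for the lite interpreter to reproduce the error (the output file name is a placeholder, not from the original report):

import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# Pre-quantized MobileNetV3-Large from torchvision (it uses hardswish/hardsigmoid internally)
model = torchvision.models.quantization.mobilenet_v3_large(pretrained=False, quantize=True).eval()

# Script, optimize, and save for the lite interpreter
scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("mobilenet_v3_large.ptl")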

Are you using a custom build of pytorch?

Did you run optimize_for_mobile when exporting the model?

import logging

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

def convert_to_mobile(self):
    logging.info("mode: convert_to_mobile")
    self.net.eval()

    logging.info("step1: quantize model")
    model_quantizer = ModelQuantizer()  # project-specific quantization helper, not a torch API
    quantized_model = model_quantizer.quantize_model(self.cfg, self.net, self.cur_device, "qnnpack")
    # quantized_model = self.net

    logging.info("step2: script/trace model")
    traced_script_model = torch.jit.script(quantized_model)

    logging.info("step3: model optimization")
    traced_script_model_optimized = optimize_for_mobile(traced_script_model)
    # traced_script_model_optimized = traced_script_model

    logging.info("step4: save mobile model")
    mobile_model_path = self.cfg['common']['convert_model_mobile']
    traced_script_model_optimized._save_for_lite_interpreter(mobile_model_path)

Yes. Is there a problem with the custom build of PyTorch?
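If it is a selective (custom op list) build, one way to check which operators the exported model actually needs is torch.jit.export_opnames; a minimal sketch (quantized_model is a placeholder for your module, and the listed root ops may not include every prim:: op the lite interpreter relies on):

import torch

# Dump the root operators required by the scripted model; a selective mobile
# build has to include all of these in its op list.
scripted = torch.jit.script(quantized_model)   # placeholder: your quantized module
for op_name in sorted(torch.jit.export_opnames(scripted)):
    print(op_name)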

I’d try it with a Hardswish module rather than the functional; it looks like pytorch_android_lite doesn’t like the quantization check in the quantized hardswish op.
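A rough illustration of that suggestion (hypothetical model code, not from the thread): keep a torch.nn.Hardswish module on the model and call it in forward instead of the functional form.

import torch

class Block(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.hs = torch.nn.Hardswish()   # module form, as suggested

    def forward(self, x):
        # instead of the functional form: torch.nn.functional.hardswish(x)
        return self.hs(x)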

Alternatively, you can just remove that line (pytorch/functional.py at 871a31b9c444e52ba0cc6667fb317c11802ac4de · pytorch/pytorch · GitHub) in your custom version of PyTorch and see if that works.

The function torch.nn.functional.hardsigmoid works instead of torch.nn.Hardsigmoid.
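In other words, something along these lines (a hypothetical sketch of the swap described above, not code from the thread):

import torch
import torch.nn.functional as F

class Gate(torch.nn.Module):   # hypothetical module
    def forward(self, x):
        # instead of holding a torch.nn.Hardsigmoid() module and calling it here,
        # the functional form is reported to work with pytorch_android_lite
        return F.hardsigmoid(x)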