torch.jit.trace throwing an error for a quantized model

Hi,
I am trying to quantize the CRAFT pretrained model and optimize it for Android mobile, but I get the error below when calling torch.jit.trace:

"NotImplementedError: Could not run 'quantized::conv2d.new' with arguments from the 'CPU' backend."

(The pretrained model is from GitHub - clovaai/CRAFT-pytorch: Official implementation of Character Region Awareness for Text Detection (CRAFT).)

The code I used is below. Please let me know where I am going wrong.

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

import craft  # from the CRAFT-pytorch repo

model = craft.CRAFT()
model.load_state_dict(copyStateDict(torch.load('craft_mlt_25k.pth', map_location='cpu')))
model.eval()

backend = 'qnnpack'
qconfig = torch.quantization.get_default_qconfig(backend)
model.qconfig = qconfig

torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)

example = torch.randn(1, 3, 1024, 1024)
traced_script_module = torch.jit.trace(model, example)  # error is raised here
traced_script_module_optimized = optimize_for_mobile(traced_script_module)
traced_script_module_optimized._save_for_lite_interpreter('craft_mlt_25k.ptl')
```
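For reference, from the PyTorch eager-mode static quantization docs it looks like the input has to pass through a QuantStub (and the output through a DeQuantStub), with a calibration forward pass between prepare and convert. A minimal sketch on a toy model (TinyConvNet here is just an illustration, not CRAFT) traces without this error for me:

```python
import torch

class TinyConvNet(torch.nn.Module):
    """Toy stand-in for a conv model, with quant/dequant stubs at the boundaries."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fp32 -> int8 at the input
        self.conv = torch.nn.Conv2d(3, 8, 3)
        self.relu = torch.nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # int8 -> fp32 at the output

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.relu(x)
        return self.dequant(x)

model = TinyConvNet().eval()
model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.backends.quantized.engine = 'qnnpack'  # match the qconfig backend
torch.quantization.prepare(model, inplace=True)

# Calibration pass with representative data so the observers record ranges
with torch.no_grad():
    model(torch.randn(1, 3, 64, 64))

torch.quantization.convert(model, inplace=True)

# Tracing now succeeds because QuantStub quantizes the fp32 input
traced = torch.jit.trace(model, torch.randn(1, 3, 64, 64))
```

So my guess is that CRAFT's forward needs similar stubs (or the quantized submodules need quantized inputs), but I am not sure how to apply that to the pretrained network.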

I have also tried torch.jit.script instead of torch.jit.trace and ended up with the error mentioned in Convert pytorch model to ptl.