I am trying to export the pretrained quantized models to ONNX, but it fails. I've tried GoogLeNet, ResNet18 and MobileNet v2, and none of them export.
I got the following error for GoogLeNet and ResNet18:
RuntimeError: self.qscheme() == at::kPerTensorAffine INTERNAL ASSERT FAILED at /pytorch/aten/src/ATen/native/quantized/QTensor.cpp:162, please report a bug to PyTorch. clone for quantized Tensor only works for PerTensorAffine scheme right now
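For reference, this is roughly the export code I'm running; a minimal sketch, assuming the torchvision quantized model zoo. The output file name and opset version are just placeholders:

```python
import torch
import torchvision

# Load a pretrained, already-quantized model from torchvision
# (resnet18 shown here; googlenet and mobilenet_v2 fail the same way).
model = torchvision.models.quantization.resnet18(pretrained=True, quantize=True)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)

# Attempt the ONNX export; this is where the INTERNAL ASSERT fires.
torch.onnx.export(
    model,
    dummy_input,
    "resnet18_quantized.onnx",   # placeholder file name
    opset_version=10,            # placeholder opset
    input_names=["input"],
    output_names=["output"],
)
```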
Hey, I've changed the qconfig to torch.quantization.get_default_qconfig('qnnpack'), and the RuntimeError: self.qscheme() == at::kPerTensorAffine INTERNAL ASSERT FAILED error no longer occurs. This is just a workaround, though.
Now I get a KeyError: 'conv2d_relu' after that small change.
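Here is a sketch of the workaround, in case it helps; it assumes quantizing the float model myself with the qnnpack (per-tensor) qconfig rather than loading the per-channel pretrained quantized weights. The calibration loop, file name and opset are placeholders:

```python
import torch
import torchvision

# Use the qnnpack backend so the converted model uses per-tensor quantization.
torch.backends.quantized.engine = "qnnpack"

# Start from the float version of the model (quantize=False) and quantize it here.
model = torchvision.models.quantization.resnet18(pretrained=True, quantize=False)
model.eval()

model.qconfig = torch.quantization.get_default_qconfig("qnnpack")
model.fuse_model()
torch.quantization.prepare(model, inplace=True)

# Calibrate with a few batches of representative data (random tensors as a placeholder).
with torch.no_grad():
    for _ in range(10):
        model(torch.randn(1, 3, 224, 224))

torch.quantization.convert(model, inplace=True)

# The export now gets past the PerTensorAffine assert,
# but fails with KeyError: 'conv2d_relu'.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet18_qnnpack.onnx", opset_version=10)
```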