I am trying to export the pretrained quantized models to ONNX, but it fails. I've tried GoogLeNet, ResNet18, and MobileNet v2, and none of them exported.
For GoogLeNet and ResNet18, I got the following error:
RuntimeError: self.qscheme() == at::kPerTensorAffine INTERNAL ASSERT FAILED at /pytorch/aten/src/ATen/native/quantized/QTensor.cpp:162, please report a bug to PyTorch. clone for quantized Tensor only works for PerTensorAffine scheme right now
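For context, here is a minimal sketch (my own reconstruction, not code from the models themselves) of what the assert seems to be complaining about: conv/linear weights in the quantized model zoo are quantized per channel, while the clone operation the exporter hits only supports the per-tensor affine scheme in the affected builds.

```python
import torch

# Build a small per-channel quantized tensor, the same scheme used for
# conv/linear weights in torchvision's quantized models.
w = torch.randn(4, 3)
qw = torch.quantize_per_channel(
    w,
    scales=torch.ones(4),
    zero_points=torch.zeros(4, dtype=torch.long),
    axis=0,
    dtype=torch.qint8,
)
print(qw.qscheme())  # torch.per_channel_affine, not torch.per_tensor_affine

try:
    # clone() is the call the INTERNAL ASSERT fires on for this scheme
    # in the affected PyTorch versions.
    qw.clone()
    print("clone succeeded (supported in this PyTorch build)")
except RuntimeError as e:
    print("clone failed:", e)
```

So the failure is not specific to any one model: anything whose weights use per-channel quantization would trip the same assert on those versions.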