ONNX export of quantized model

@neginraoof Can you post a small example showing how to export a quantized model? Should it work with static quantization as well as QAT? I'm on PyTorch 1.8.1 and tried @G4V's example from here, but I still get the following error even with `ONNX_ATEN_FALLBACK`:

```
AttributeError: 'torch.dtype' object has no attribute 'detach'
```
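
For reference, here is roughly what I'm running; a minimal sketch using a toy conv model with static (post-training) quantization. The model, shapes, and output filename are just placeholders for my actual setup:

```python
import torch
import torch.nn as nn

# Toy model with quant/dequant stubs for static quantization.
class SmallModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = SmallModel().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 32, 32))  # calibration pass with dummy data
torch.quantization.convert(model, inplace=True)

# Export the converted (quantized) model; this is where the error is raised.
dummy = torch.randn(1, 3, 32, 32)
torch.onnx.export(
    model,
    dummy,
    "quantized.onnx",
    opset_version=13,
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
```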