ONNX export of quantized model

UPD: If I script the model with torch.jit.script(model_int8) before torch.onnx.export, I get another error:
RuntimeError: Exporting the operator quantize_per_tensor to ONNX opset version 9 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
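The error points at the opset: torch.onnx.export defaults to opset 9 in older PyTorch releases, while quantize_per_tensor maps to the ONNX QuantizeLinear operator, which only exists from opset 10 onward. Below is a minimal sketch of what passing an explicit opset_version looks like; the tiny model, qconfig, input shapes, and output filename are illustrative assumptions, not from the original post, and whether the full quantized graph exports cleanly still depends on the PyTorch version and its operator coverage:

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for model_int8, quantized eager-mode.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.fc = nn.Linear(4, 2)
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)    # inserts quantize_per_tensor after convert()
        x = self.fc(x)
        return self.dequant(x)

model = TinyNet().eval()
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
prepared = torch.ao.quantization.prepare(model)
prepared(torch.randn(8, 4))  # calibration pass with dummy data
model_int8 = torch.ao.quantization.convert(prepared)

scripted = torch.jit.script(model_int8)
torch.onnx.export(
    scripted,
    torch.randn(1, 4),
    "model_int8.onnx",
    opset_version=13,  # quantize_per_tensor -> QuantizeLinear needs opset >= 10
)
```

Note that very old PyTorch versions also required an example_outputs argument when exporting a ScriptModule; newer releases dropped it.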
