I’ve trained a quantized model (ResNet50) and exported it to ONNX. I want to run inference on this model with TensorRT, but the ONNX model fails to parse, so the TensorRT engine is never created.
I tested the same model (from training to inference) with a previous version of PyTorch, and it worked correctly.
The error output is:
[09/20/2022-08:16:58] [TRT] [E] onnx::QuantizeLinear_1259: invalid weights type of Int8
ERROR: Failed to parse the ONNX file.
In node 0 (parseGraph): INVALID_NODE: Invalid Node - Identity_0
onnx::QuantizeLinear_1259: invalid weights type of Int8
The current version that gives me the error is: