I have a 2D ConvNet that I export from PyTorch to both ONNX and TorchScript. The TorchScript output matches the native PyTorch output, but the ONNX model, which I run with onnxruntime-gpu, does not match the source model's output.
I compare the outputs on real-life data with torch.isclose(atol=1e-5, rtol=1e-5). When I use torch.zeros, torch.ones, or torch.rand as input, the test passes, but inputs with a higher dynamic range make the outputs differ.
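For reference, the comparison I run can be sketched roughly like this (outputs_match is a hypothetical helper, not from my actual script; the tolerances match the values above). It also illustrates that a fixed atol/rtol budget tolerates much smaller absolute errors, relative to magnitude, once the input's dynamic range grows:

```python
import torch

def outputs_match(ref: torch.Tensor, test: torch.Tensor,
                  atol: float = 1e-5, rtol: float = 1e-5) -> bool:
    """Element-wise closeness check: |ref - test| <= atol + rtol * |test|."""
    return bool(torch.isclose(ref, test, atol=atol, rtol=rtol).all())

a = torch.rand(4, 3, 8, 8)          # values in [0, 1), like torch.rand input
print(outputs_match(a, a + 1e-7))   # True: tiny perturbation within atol

b = a * 1e4                         # higher dynamic range, values up to ~1e4
print(outputs_match(b, b + 1.0))    # False: 1.0 exceeds atol + rtol * |b|
```

Float32 rounding error in a conv stack grows in absolute terms with the magnitude of the activations, so a tolerance pair that passes on [0, 1) inputs can fail on wider-range data even when the relative error is similar.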
Here are the script, the ONNX model, and the sample input data I've been using: link