ONNX vs Torch Output Mismatch

I have a 2D ConvNet that I export from PyTorch to both ONNX and TorchScript. The TorchScript output matches the native PyTorch output, but the ONNX model, run with onnxruntime-gpu, does not.
I compare the outputs on real-life data with torch.isclose(atol=1e-5, rtol=1e-5). With torch.zeros, torch.ones, or torch.rand as input the test passes, but inputs with a higher dynamic range make the outputs diverge.
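This magnitude dependence is what you would expect from float32 rounding: two backends that accumulate a convolution's terms in different orders agree when all values are small, but the per-step rounding error grows with the largest intermediate value. A minimal sketch (using NumPy arithmetic as a stand-in for two inference backends; the specific terms are made up for illustration):

```python
import numpy as np

atol = rtol = 1e-5

def reduce_two_ways(terms):
    """Sum the same float32 terms in two mathematically equivalent orders,
    mimicking two backends whose kernels accumulate differently."""
    left_to_right = np.float32(0)
    for t in terms:
        left_to_right = np.float32(left_to_right + t)
    paired = np.float32((terms[0] + terms[2]) + (terms[1] + terms[3]))
    return left_to_right, paired

# Low dynamic range, like torch.rand inputs: both orders agree within 1e-5.
small = np.array([0.5, 0.314159, -0.5, 0.271828], dtype=np.float32)
a, b = reduce_two_ways(small)
print(np.isclose(a, b, rtol=rtol, atol=atol))

# High dynamic range: adding ~3.14 to 1e7 loses the fractional part
# (float32 ulp near 1e7 is 1.0), so the two orders disagree badly.
large = np.array([1.0e7, 3.14159, -1.0e7, 2.71828], dtype=np.float32)
a, b = reduce_two_ways(large)
print(np.isclose(a, b, rtol=rtol, atol=atol))
```

With wide-range inputs the absolute rounding error is set by the largest intermediate sums, so it can easily exceed atol + rtol * |expected| even though both results are "correct" float32 reductions.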

Here are the script, the ONNX model, and the sample input data I've been using: link

ONNX Runtime outputs on GPU are not necessarily deterministic; see Inference on GPU is not deterministic · Issue #4611 · microsoft/onnxruntime · GitHub.


I finally completed the production code and deployed the model. Predictions from the ONNX -> TensorRT model are still accurate.
