Inference model with BatchNorm2D generates different outputs in PyTorch and OnnxRuntime

My setup is:
OS - Microsoft Windows 10 Enterprise 2016 LTSB
GPU - Quadro M2000M
CUDA versions - 9.0, 10.0 & 11.0
Visual Studio - 2017, 2019
Python - 3.6.8
Torch - 1.1.0 / 1.6.0
onnxruntime - 1.5.1
onnxruntime-gpu - 1.4.0
onnx - 1.7.0

Please refer to the following link, which shows that the torch ONNX exporter may have a problem with the BatchNorm2D operation:
BatchNorm2D Export problem

There you will find all the information describing the problem.

I thought it would help push the investigation forward to open an issue here as well, since the linked report also concerns OnnxRuntime.
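For reference, here is a minimal sketch of how we compare the PyTorch and OnnxRuntime outputs. The model (TinyNet), the tensor shapes, and the file name are placeholders I made up for illustration, not the actual network from the linked issue; in our case the mismatch appears when the model contains BatchNorm2d.

```python
import numpy as np
import onnxruntime as ort
import torch
import torch.nn as nn


class TinyNet(nn.Module):
    """Placeholder model containing a BatchNorm2d layer."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))


model = TinyNet()
model.eval()  # export in eval mode so BatchNorm uses its running statistics

dummy = torch.randn(1, 3, 32, 32)
torch.onnx.export(model, dummy, "tinynet.onnx",
                  input_names=["input"], output_names=["output"])

# Run the same input through PyTorch and OnnxRuntime and compare the results
with torch.no_grad():
    torch_out = model(dummy).numpy()

sess = ort.InferenceSession("tinynet.onnx")
ort_out = sess.run(None, {"input": dummy.numpy()})[0]

print("max abs diff:", np.abs(torch_out - ort_out).max())
np.testing.assert_allclose(torch_out, ort_out, rtol=1e-3, atol=1e-5)
```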

Thanks,

Hello, is there any update regarding this topic?

Hello,
Any kind of support would be much appreciated.

It is a major concern that we cannot include the BatchNorm2D operator in our models because of this *.onnx export problem.

In order to use the NVIDIA TensorRT SDK, we need to understand the root cause of the incorrect BatchNorm2D behavior after converting the TorchScript *.pt model to the *.onnx format.
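To show where the mismatch could be introduced, this is roughly how we perform the TorchScript-to-ONNX conversion and then inspect the resulting graph. The file names, the input shape, and the use of example_outputs (which torch 1.6 expects when exporting a ScriptModule) are assumptions on my side, not details taken from the linked issue.

```python
import onnx
import torch

# Placeholder file names; replace with the actual checkpoint and output paths
scripted = torch.jit.load("model.pt")
scripted.eval()  # make sure BatchNorm layers use running stats, not batch stats

dummy = torch.randn(1, 3, 224, 224)  # placeholder input shape
with torch.no_grad():
    example_out = scripted(dummy)

# Exporting a ScriptModule in torch 1.6 requires example_outputs
torch.onnx.export(scripted, dummy, "model.onnx", example_outputs=example_out)

# Check whether BatchNormalization nodes survived the export or were folded into Conv
graph = onnx.load("model.onnx").graph
print([node.op_type for node in graph.node])
```

Comparing the list of op types before and after conversion might indicate whether the BatchNorm parameters are being folded or exported incorrectly.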

Thanks,