Performance highly degraded when eval() is activated in the test phase

I have a similar problem. The evaluation loss with `track_running_stats = True` is enormous. The only workaround is to set `track_running_stats = False`, but unfortunately the model then cannot be evaluated with `batch_size = 1`. Does the model still compute `running_mean` and `running_var` in `model.eval()`? I thought that with `track_running_stats = False` there is no need for them to be computed. Could you please take a look at my post: Batch norm training mode error despite model.eval()
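
For reference, here is a minimal sketch of the behaviour I mean (the `BatchNorm1d` layer and shapes are just placeholders, not my actual model): with `track_running_stats=False` the layer has no stored statistics, so even in `eval()` it normalizes with the current batch statistics, which is ill-defined for a batch of one.

```python
import torch
import torch.nn as nn

# Two toy layers, only differing in track_running_stats
bn_tracking = nn.BatchNorm1d(4, track_running_stats=True)
bn_no_tracking = nn.BatchNorm1d(4, track_running_stats=False)

# One "training" pass: the first layer updates running_mean / running_var,
# the second keeps nothing
x = torch.randn(8, 4)
bn_tracking(x)
bn_no_tracking(x)

bn_tracking.eval()
bn_no_tracking.eval()

single = torch.randn(1, 4)
print(bn_tracking(single))        # works: uses the stored running stats

try:
    bn_no_tracking(single)        # falls back to batch stats of a single sample
except ValueError as e:
    print("batch_size = 1 fails:", e)
```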