Weird BatchNorm behaviour

I have a pretrained CNN model which has several BatchNorm2d layers.

I take a batch with 1 image, set model.eval(), and run a forward pass.
If I run the same image with model.train(), I get a different output!

As I understand it, the only difference in BatchNorm behaviour between eval and train mode is that in eval mode it doesn't update the moving averages. That means I should have got the same output in both cases!
Could you please point out my mistake?

In train mode, batch normalization computes the statistics (per-channel mean and variance) of the batch you provide, normalizes with those statistics, and then applies the learned affine transformation. In eval mode, instead of computing batch statistics, it normalizes with the stored moving averages. So the difference is not just that eval mode skips updating the moving averages; the two modes normalize with different statistics entirely, and getting different outputs is expected. The outputs would only coincide if your batch's statistics happened to equal the running averages.
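Here is a minimal sketch demonstrating the difference with a standalone nn.BatchNorm2d layer; the running statistics are filled with arbitrary values to stand in for a pretrained model's averages:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

bn = nn.BatchNorm2d(3)
# Hypothetical "pretrained" running statistics that differ
# from the statistics of the batch below.
bn.running_mean.fill_(0.5)
bn.running_var.fill_(2.0)

x = torch.randn(1, 3, 4, 4)  # a batch with a single image

bn.train()            # normalizes with the batch's own mean/var
out_train = bn(x)

bn.eval()             # normalizes with running_mean/running_var
out_eval = bn(x)

print(torch.allclose(out_train, out_eval))  # False: different statistics were used
```

Note also that calling the layer in train mode updates the running averages as a side effect, so repeatedly running your 1-image batch in train mode will gradually drift the pretrained statistics.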