I have a pretrained CNN model which has several BatchNorm2d
layers.
I take a batch with 1 image, set model.eval(),
and run a forward pass.
If I run the same image but with model.train(),
I get a different output!
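A minimal sketch of the experiment (using a small stand-in model instead of my pretrained CNN, since the effect shows up with any BatchNorm2d layer):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the pretrained CNN: one conv + BatchNorm2d
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
)

x = torch.randn(1, 3, 32, 32)  # a batch with a single image

model.eval()
with torch.no_grad():
    out_eval = model(x)

model.train()
with torch.no_grad():
    out_train = model(x)

# The two outputs differ even though the input is identical
print(torch.allclose(out_eval, out_train))
```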
As I understood it, the only difference in BatchNorm
behaviour between eval
and train
mode is that in eval
mode it doesn't update the moving averages. That means that in my experiment I should have gotten the same output in both cases!
Could you please point out my mistake?