Fine-tuning with torch.nn.BatchNorm2d: running statistics change while num_batches_tracked stays unchanged


I am fine-tuning from a trained model. To freeze the BatchNorm2d layers, I set all of them to eval mode during training. But I noticed something strange: after a few epochs, running_mean and running_var have changed slightly (most changes are very small; the absolute sum of the change is around 2e-4, but the largest single change reaches 1.65). Meanwhile, num_batches_tracked is unchanged for all the BatchNorm2d layers.

I wonder why running_mean and running_var change if no new batch statistics are being accumulated. I am also using Automatic Mixed Precision during training; could that be related? Any suggestions?
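For context, here is a minimal sketch of how I freeze the layers (the model and `freeze_bn` helper below are stand-ins for my actual code, assuming the usual pattern of calling `.eval()` on each BatchNorm2d module):

```python
import torch
import torch.nn as nn

# Toy model standing in for the real network.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
)

def freeze_bn(module):
    # eval mode makes the forward pass use the stored running stats
    # and stops updates to running_mean/running_var/num_batches_tracked
    if isinstance(module, nn.BatchNorm2d):
        module.eval()
        for p in module.parameters():
            p.requires_grad = False  # also freeze the affine weight/bias

model.train()          # the rest of the model stays in train mode
model.apply(freeze_bn)

bn = model[1]
before_mean = bn.running_mean.clone()
before_tracked = bn.num_batches_tracked.clone()

# A forward pass with BN in eval mode should leave the buffers untouched.
model(torch.randn(4, 3, 16, 16))
assert torch.equal(bn.running_mean, before_mean)
assert torch.equal(bn.num_batches_tracked, before_tracked)
```

One thing I am aware of: any later call to `model.train()` (e.g. at the start of each epoch) puts the BatchNorm layers back into training mode, so `freeze_bn` has to be re-applied after it.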

Thank you for your help.

This shouldn’t be the case. Could you post a minimal, executable code snippet to reproduce the issue?

The project is a bit complex, so I'll try to extract a minimal sample.
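While extracting it, a diagnostic along these lines might help narrow things down: snapshot every BatchNorm buffer before training and diff afterwards, to pinpoint which layers (if any) drift. This is a hypothetical sketch with a toy model and plain SGD, not the actual project code:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 2),
)

def snapshot_bn(model):
    # Clone every buffer (running_mean, running_var, num_batches_tracked)
    # of each BatchNorm2d module, keyed by module name.
    return {
        name: {k: b.clone() for k, b in m.named_buffers()}
        for name, m in model.named_modules()
        if isinstance(m, nn.BatchNorm2d)
    }

model.train()
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.eval()  # freeze running-stat updates

before = snapshot_bn(model)

opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(3):
    x = torch.randn(4, 3, 16, 16)
    loss = model(x).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

after = snapshot_bn(model)
for name, bufs in before.items():
    for k, b in bufs.items():
        diff = (after[name][k].float() - b.float()).abs().sum().item()
        print(name, k, diff)  # expected to be 0.0 with BN in eval mode
```

If the diffs are zero here but nonzero in the real project, the difference between the two setups (AMP, checkpoint loading, a stray `model.train()` call) should point at the culprit.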