How to truly freeze a network with BN layers during training?

Suppose I use a pretrained network as the backbone, and extra layers are added afterwards to form a new model.
I train only the parameters of the extra layers. Several epochs later, I inspect the parameters of two trained models. All the parameters in the fixed backbone are identical, yet the two models output different backbone features during evaluation, and the differences first appear after the first BN layer, even though its parameters are the same. I realized that running_mean and running_var are buffers rather than parameters, so they keep updating during training even when every parameter is frozen, which is the main reason.
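A minimal sketch that reproduces this behavior (the two-layer backbone below is just a stand-in for a real pretrained network): setting requires_grad=False freezes the weights, but BN still updates its running statistics on every forward pass while the module is in train mode.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained backbone.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
for p in backbone.parameters():
    p.requires_grad = False  # freezes the weights, but NOT the running stats

bn = backbone[1]
before = bn.running_mean.clone()

backbone.train()                      # train mode -> BN stats are updated
backbone(torch.randn(4, 3, 16, 16))   # forward pass only, no backward

print(torch.allclose(before, bn.running_mean))  # False: stats changed anyway
```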

If I want to keep the running statistics of the pretrained backbone unchanged during training (truly fix the model with BN layers), and at test time I also need the running statistics to stay the same as the pretrained ones, how should I do this?

then you should just set the backbone to eval mode. In eval mode, BN layers normalize with their stored running statistics instead of batch statistics, and they do not update those statistics. One caveat: calling model.train() recursively puts every submodule back into train mode, so you have to call backbone.eval() again after each model.train().
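
A minimal sketch of a training loop that does this (the wrapper class, head, and shapes are made up for illustration):

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    """Hypothetical wrapper: pretrained backbone + new trainable head."""
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(8, 10)  # placeholder extra layers

    def forward(self, x):
        feats = self.backbone(x)                   # (N, 8, H, W)
        return self.head(feats.mean(dim=(2, 3)))   # global average pool + head

backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))  # stand-in
model = MyModel(backbone)

# 1. Freeze the backbone weights so no gradients flow into them.
for p in model.backbone.parameters():
    p.requires_grad = False

# 2. Give the optimizer only the head's parameters.
optimizer = torch.optim.SGD(model.head.parameters(), lr=0.01)

for epoch in range(2):
    model.train()           # sets EVERY submodule to train mode...
    model.backbone.eval()   # ...so re-freeze the BN running stats right after

    x = torch.randn(4, 3, 16, 16)
    loss = model(x).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

bn = model.backbone[1]
print(bn.running_mean)  # unchanged from the pretrained values
```

Since the backbone stays in eval mode at test time as well, the same pretrained running statistics are used there too. A pattern some codebases use to make this less error-prone is overriding train() on the wrapper so the backbone is forced back into eval mode automatically.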