How to fine-tune a pretrained model with BatchNorm

Hi,
I am training a model with a self-defined classifier on top of vgg19_bn. I set the features part to eval() mode with requires_grad = False. After training, the loss went from 10 down to about 0.015.
Now I want to fine-tune the whole model. With the full model set to train() mode, I got an abnormally high loss (about 2.7). This didn’t happen when I did the same with vgg19, so I suspect the problem is in the BatchNorm layers. When I then set each BatchNorm layer back to eval() mode, the loss returned to normal (about 0.017).
Is it correct to set each BatchNorm layer to eval() mode when fine-tuning the whole model, or is there a better way to do this?

BTW, the batch size is 16, and all other parameters are at their defaults.
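For reference, here is roughly what I did to freeze the BatchNorm layers during fine-tuning (the toy model below is just illustrative, not my actual vgg19_bn setup):

```python
import torch.nn as nn

# Toy stand-in for a conv block with BatchNorm (layer sizes are illustrative).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(inplace=True),
)

# Put the whole model in train() mode for fine-tuning...
model.train()

# ...then switch only the BatchNorm layers to eval() mode so their
# running mean/var statistics are not updated by the small batches.
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.eval()

print(model[0].training)  # conv layer is in train mode
print(model[1].training)  # BatchNorm layer stays in eval mode
```

Note that m.eval() only stops the running-statistics update; the BatchNorm affine weights still receive gradients unless requires_grad is also set to False.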