Question about FrozenBatchNorm for fine-tuning

Hi, I have a quick question about FrozenBatchNorm.

When fine-tuning a pretrained model, I’ve heard that it is common to freeze the BatchNorm layers. But my data is quite different from the data the model was pre-trained on, so I want to train these layers as well. Is it better to freeze only the running statistics? Or, if I want to unfreeze BatchNorm, should I also train all of its parameters (the affine weight and bias)?
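To make the two options concrete, here is a minimal PyTorch sketch (using a toy `nn.Sequential` model as a stand-in for a real pretrained backbone). Option A freezes only the running statistics by putting BN modules in eval mode while leaving the affine `weight`/`bias` trainable; Option B additionally disables gradients on the affine parameters. The helper names are my own, not an existing API:

```python
import torch.nn as nn

# Toy stand-in for a pretrained backbone (hypothetical; substitute your own model).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
)

def freeze_bn_stats(model):
    """Option A: freeze only the running statistics.

    BN stays in eval mode, so the stored running_mean/running_var are
    used for normalization and are no longer updated, while the affine
    weight/bias still receive gradients and are fine-tuned.
    """
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            # Note: model.train() re-enables stat updates, so this must
            # be re-applied after every call to model.train().
            m.eval()

def freeze_bn_all(model):
    """Option B: freeze the running statistics AND the affine parameters."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.eval()
            if m.affine:
                m.weight.requires_grad_(False)
                m.bias.requires_grad_(False)

model.train()
freeze_bn_stats(model)
bn = model[1]
print(bn.training)              # False: running stats are frozen
print(bn.weight.requires_grad)  # True: affine params still trainable
```

One caveat with Option A: because `model.train()` flips every submodule back to training mode, the `m.eval()` calls have to be reapplied at the start of each training epoch (or you can override the module's `train()` method instead).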