How to fix BatchNorm parameters during fine-tuning

I have a small dataset on which I want to fine-tune the weights of a CNN, but I don't want this small dataset to corrupt my BatchNorm statistics. How can I freeze them during fine-tuning?

Answer: Call `.eval()` on all BatchNorm modules before training. This stops the running mean and variance from being updated by the fine-tuning batches. If you also want to keep the learnable affine parameters (the per-channel scale and shift) fixed, set their `requires_grad` to `False` as well.
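
Here is a minimal sketch of that idea, assuming a PyTorch model (the `freeze_batchnorm` helper and the use of torchvision's `resnet18` are just illustrations, not part of the original answer):

```python
import torch.nn as nn
from torchvision import models

def freeze_batchnorm(model: nn.Module) -> None:
    """Put every BatchNorm layer in eval mode and freeze its affine params."""
    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    for module in model.modules():
        if isinstance(module, bn_types):
            module.eval()  # stop updating running_mean / running_var
            if module.affine:
                # also freeze the learnable scale (weight) and shift (bias)
                module.weight.requires_grad_(False)
                module.bias.requires_grad_(False)

model = models.resnet18(weights="IMAGENET1K_V1")
model.train()            # rest of the network stays in training mode...
freeze_batchnorm(model)  # ...but the BatchNorm layers stay in eval mode
```

One caveat: any later call to `model.train()` flips the BatchNorm modules back into training mode, so you need to re-apply `freeze_batchnorm(model)` after each `model.train()` call in your training loop (or override the model's `train()` method to do it for you).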
