Different ways to freeze a layer?

If you want to freeze a sub-module of an nn.Module object (e.g. you want to freeze some layer of a NN), what’s the difference between:

A) model.layer_to_freeze.eval()

B) model.layer_to_freeze.requires_grad_(False)



train()/eval() change the behavior of some layers and are used during training and evaluation, respectively. E.g. batchnorm layers will use their running stats to normalize the input during evaluation instead of the current batch stats, and dropout layers will be disabled during evaluation. Neither call affects gradient computation, so eval() alone does not freeze anything.
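A minimal sketch of this behavior (the two-layer model and its layer names are made up for illustration): calling eval() on just the batchnorm sub-module switches it to running stats and stops the stats from updating, but its affine parameters still receive gradients.

```python
import torch
import torch.nn as nn

# Hypothetical model: a batchnorm layer followed by a linear layer.
model = nn.Sequential(nn.BatchNorm1d(4), nn.Linear(4, 2))

# Put only the batchnorm layer into eval mode: it now normalizes with
# its running stats and no longer updates them.
model[0].eval()

x = torch.randn(8, 4)
model(x).mean().backward()

print(model[0].training)                 # False: layer is in eval mode
print(model[0].weight.grad is not None)  # True: eval() did not freeze it
```

Note that the batchnorm parameters still get valid gradients, so an optimizer would keep updating them.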
Setting the requires_grad attribute of parameters to False disables gradient calculation for these parameters, which is what is usually meant by “freezing” layers. It does not change the train/eval behavior of the layer.
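A short sketch of approach B (again with a made-up two-layer model): requires_grad_(False) on a sub-module stops gradients for its parameters only, while leaving the module in training mode.

```python
import torch
import torch.nn as nn

# Hypothetical model: freeze the first linear layer, train the second.
model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
model[0].requires_grad_(False)

x = torch.randn(8, 4)
model(x).sum().backward()

print(model[0].weight.grad)              # None: frozen parameters
print(model[1].weight.grad is not None)  # True: still trainable
print(model[0].training)                 # True: mode is unchanged
```

In practice, fully “freezing” a batchnorm layer often means combining both calls: requires_grad_(False) to stop parameter updates and eval() to stop the running stats from changing.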