Freezing some parameters in a layer


I’d like to freeze a set of selected parameters in a given layer when I’m training a neural network.
How’s this done in PyTorch?

Have a look at this tutorial.

This seems to be freezing weights layer-wise, i.e., all the parameters of a layer are frozen.

What I want is something more fine-grained. For example, I’d like to identify the top 5% of parameters in a given layer (by weight magnitude) and only freeze those.

In this case you won’t be able to use the requires_grad attribute, as it’s defined for the whole parameter tensor.
However, you could zero out the gradients of the specific values which shouldn’t be updated by registering a hook via param.register_hook.
Note that even with a zero gradient your parameters might still be updated, e.g. if you are using weight decay or an optimizer with momentum for all parameters.
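Here’s a minimal sketch of that idea for the 5% example above, assuming a plain nn.Linear layer (the names layer, freeze_mask, etc. are just illustrative): build a boolean mask marking the top 5% of weights by absolute value, then register a hook that multiplies the incoming gradient by the inverted mask, so the masked entries receive zero gradient.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(10, 10)

# Mark the top 5% of weights by absolute value as "frozen".
weight = layer.weight
k = max(1, int(0.05 * weight.numel()))
threshold = weight.abs().flatten().topk(k).values.min()
freeze_mask = weight.abs() >= threshold  # True where the weight is frozen

# The hook receives the gradient and returns a modified version;
# frozen positions are multiplied by 0, the rest pass through unchanged.
layer.weight.register_hook(lambda grad: grad * (~freeze_mask).float())

# Sanity check: after backward, frozen positions have zero gradient.
out = layer(torch.randn(4, 10)).sum()
out.backward()
print(layer.weight.grad[freeze_mask].abs().sum())  # tensor(0.)
```

As noted above, this only zeroes the gradient; an optimizer applying weight decay (or a nonzero momentum buffer) will still move the masked weights, so you may need to exclude them from those terms or restore their values after each step.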