Updating frozen parameters / Fine Tuning

Hi,

This is a question that came up after reading the official tutorial on fine-tuning.

Suppose we follow the tutorial and want to use the convolutional layers only as a feature extractor, updating just the weights of the fully-connected layer.

In that case we freeze the weights of the convolutional layers with param.requires_grad = False, so that only the fully-connected layer keeps param.requires_grad = True, as seen here and in the sketch below.
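For concreteness, a minimal sketch of that setup, assuming a torchvision ResNet-18 backbone and two output classes (both illustrative choices, not taken from the tutorial):

import torch.nn as nn
import torchvision.models as models

model = models.resnet18(pretrained=True)

# Freeze everything first: these parameters will not receive new gradients.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head; a freshly created module defaults to
# requires_grad=True, so only these weights stay trainable.
model.fc = nn.Linear(model.fc.in_features, 2)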

But here the tutorial “forces” the optimizer to update only the list of parameters with requires_grad == True. Is this really necessary once we have frozen the weights in all layers but the fully-connected one? Couldn’t we simply use:

optimizer_ft = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
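instead of the filtered construction from the tutorial, which looks roughly like this (sketch, reusing the model from above):

import torch.optim as optim

params_to_update = [p for p in model.parameters() if p.requires_grad]
optimizer_ft = optim.SGD(params_to_update, lr=0.001, momentum=0.9)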

I appreciate all comments on that.

It’s just to prevent “accidental” updates (e.g. by weight decay, if these parameters had valid gradients before they were frozen); otherwise the optimizer will just skip these parameters in the update loop, since they never receive new gradients. A small sketch of that scenario follows below.
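Sketch of the “accidental update” case, using a hypothetical toy layer (not from this thread): if a parameter still holds a stale .grad from before it was frozen, SGD with weight decay keeps modifying it, because the optimizer only checks whether .grad is None, not requires_grad:

import torch
import torch.nn as nn
import torch.optim as optim

layer = nn.Linear(4, 4)
layer(torch.randn(1, 4)).sum().backward()   # populate .grad

for p in layer.parameters():
    p.requires_grad = False                 # freeze, but the stale .grad remains

opt = optim.SGD(layer.parameters(), lr=0.1, weight_decay=0.01)
before = layer.weight.clone()
opt.step()                                  # stale grad + weight decay are still applied
print(torch.equal(before, layer.weight))    # False: the "frozen" weight changed

Clearing the gradients after freezing (e.g. setting p.grad = None) would make the optimizer skip these parameters again; passing only the requires_grad parameters to the optimizer avoids having to think about this at all.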

Have a look at this topic for more information.


@ptrblck, thank you for the clarification. 🙂