Skip/freeze/omit learnable parameters

Hello community,

In order to freeze some parameters, I could set the attribute param.requires_grad to False. However, my question is about the optimizer, for example:

# one parameter group per sub-module, all sharing lr=0.01
optimizer = torch.optim.Adam([
    {'params': model.feature_extractor.parameters()},
    {'params': model.covar_module.parameters()},
    {'params': model.mean_module.parameters()},
], lr=0.01)

If I leave some of my model's parameters out of the optimizer definition, and those left-out parameters are not set to param.requires_grad = False, will these parameters still be changed?

Thanks a lot!

No, but they will accumulate gradients and use computation and memory to do so, which may or may not be significant. You might also be in for surprises when you do add them to the optimizer later.
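
For illustration, a minimal sketch (with a hypothetical toy setup, not your actual modules): backward() still fills in .grad for the left-out parameters, but optimizer.step() never updates them.

import torch

# Hypothetical two-part model: only `head` is passed to the optimizer,
# `backbone` is left out but still has requires_grad=True.
backbone = torch.nn.Linear(4, 4)
head = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(head.parameters(), lr=0.01)

x = torch.randn(8, 4)
loss = head(backbone(x)).sum()
loss.backward()

# backward() computes (and accumulates) gradients for the left-out parameters ...
print(backbone.weight.grad is not None)      # True

before = backbone.weight.detach().clone()
optimizer.step()
# ... but the optimizer never touches them, so their values do not change.
print(torch.equal(before, backbone.weight))  # True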

Best regards

Thomas

Why?

What is the best way to leave out the parameters? By setting param.requires_grad = False, or by leaving them out of the optimizer definition?

Thanks!

To permanently avoid gradients/updates, doing both would be cleanest, because you don't rely on context (à la "we calculate grads, but they are not in the optimizer" or "we pass them to the optimizer but don't calculate grads"); the Zen of Python says explicit is better than implicit. And if for some reason (distributed training?) a missing grad becomes a zero grad later on, you don't have weight decay/regularisation messing with your params, etc.
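
For example, a minimal sketch along the lines of your snippet above (assuming the same model with a feature_extractor attribute), doing both at once:

# Freeze the feature extractor's parameters ...
for param in model.feature_extractor.parameters():
    param.requires_grad = False   # no gradients are computed for these anymore

# ... and leave them out of the optimizer as well.
optimizer = torch.optim.Adam([
    {'params': model.covar_module.parameters()},
    {'params': model.mean_module.parameters()},
], lr=0.01)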

Best regards

Thomas