Add parameters to optim.Adam during training

When I initialize an optimizer from torch.optim (following the torch.optim — PyTorch 1.12 documentation), I would do it like:

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

where the model has been defined beforehand.
Suppose that during training I want to add some new parameters to the optimization, e.g. by appending a module to an nn.Sequential.

Can I add a parameter to the optimizer so that it is considered in optimizer.step(), or do I have to re-initialize the optimizer? In the latter case, all momentum information would be lost.

Yes, you can use optimizer.add_param_group to register new parameters with an existing optimizer, without losing the state (e.g. momentum buffers) of the parameters it already tracks.
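
A minimal sketch of how this could look (module names and shapes here are made up for illustration; `add_module` and `add_param_group` are the real PyTorch APIs):

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Start training with a small model.
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU())
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# ... some training steps happen here ...

# Later, add a new module to the model during training.
new_layer = nn.Linear(10, 2)
model.add_module("head", new_layer)

# Register the new module's parameters with the existing optimizer.
# The momentum buffers of the previously registered parameters are kept.
optimizer.add_param_group({"params": new_layer.parameters(), "lr": 0.01})

print(len(optimizer.param_groups))
```

Each call to add_param_group appends a new entry to optimizer.param_groups, so the new parameters can even use different hyperparameters (learning rate, momentum, weight decay) than the original group.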