Suppose I have a model M with some parameters P. I train it using Adam and save the state_dicts of both the model and the optimizer. I then add a few more parameters to the model, giving a new model Mn with parameters Pn.
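For context, here is a minimal sketch of the setup (the architecture, learning rate, and file paths below are just placeholders, not my actual code):

```python
import torch
import torch.nn as nn

# Minimal stand-in: M is the original model with parameters P
# (the real architecture, lr, and paths differ).
M = nn.Sequential(nn.Linear(10, 5))
optimizer = torch.optim.Adam(M.parameters(), lr=1e-3)

# ... training loop ...

model_path = "model.pt"
optim_path = "optim.pt"
torch.save(M.state_dict(), model_path)
torch.save(optimizer.state_dict(), optim_path)
```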
To load the variables from the saved (partial) model into Mn, I do the following:
```python
state = Mn.state_dict()
lstate = torch.load(model_path)
state.update(lstate)
Mn.load_state_dict(state)
```
This ensures that the old variables are loaded from the saved model while the new ones keep their freshly initialized values.
If I try to do the same with the optimizer, it complains that the new optimizer's state_dict() contains more parameters than the loaded state dict.
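Concretely, continuing the placeholder sketch from above (Mn here just adds one extra layer), the model part works but the optimizer part is roughly where it fails for me:

```python
# Mn = M plus a few extra parameters Pn (here: one extra layer).
Mn = nn.Sequential(nn.Linear(10, 5), nn.Linear(5, 5))

# Partial model restore as above: saved keys overwrite, new ones stay initialized.
state = Mn.state_dict()
state.update(torch.load(model_path))
Mn.load_state_dict(state)                      # works fine

# The same pattern applied to the optimizer:
new_optimizer = torch.optim.Adam(Mn.parameters(), lr=1e-3)
ostate = new_optimizer.state_dict()
ostate.update(torch.load(optim_path))          # replaces 'state' and 'param_groups' wholesale
new_optimizer.load_state_dict(ostate)
# -> ValueError along the lines of: loaded state dict contains a parameter
#    group that doesn't match the size of optimizer's group
```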
What is the recommended way to partially restore the optimizer variables, i.e. only for the old parameters?