How to set up the optimizer after loading a new state_dict during training?

In my work, I need to load an external state_dict into the network during training, e.g.,

network.load_state_dict(new_param)

But after doing this, what do I need to do with the optimizer to make sure it optimizes the newly loaded parameters?
The optimizer was initialized like

optim.SGD(params=network.parameters(), lr=0.1)

After loading the new state_dict, I guess the parameters in the network and the optimizer are no longer the same?
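One way to check this is to compare the identities of the parameter objects before and after loading. The sketch below uses a tiny nn.Linear as a stand-in for the real network, and builds the "external" state_dict as a shifted copy of the current one; both are made-up placeholders:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# hypothetical stand-in for the real network
network = nn.Linear(4, 2)
optimizer = optim.SGD(params=network.parameters(), lr=0.1)

# remember which Parameter objects the optimizer currently holds
before = [id(p) for p in network.parameters()]

# a made-up "external" state_dict: same keys, shifted values
new_param = {k: v + 1.0 for k, v in network.state_dict().items()}
network.load_state_dict(new_param)

after = [id(p) for p in network.parameters()]
print(before == after)  # True: load_state_dict copies values into the existing tensors
```

If the printed result is True, load_state_dict copied the new values into the existing Parameter tensors in place, so the optimizer still references the same objects and keeps optimizing them.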
Thank you for any advice.

Hi,

I tried to print optimizer.state_dict() and got this:

{'state': {}, 'param_groups': [{'params': [2131814292216, 2131814292288, 2131814292360], 'nesterov': False, 'weight_decay': 0, 'dampening': 0, 'momentum': 0, 'lr': 0.1}]}

And after a backward pass and an update step, optimizer.state_dict() doesn't change.

import torch
from torch.nn import MSELoss
from torch.optim import SGD

model = LinearNetwork()  # defined elsewhere
# placeholder data; shapes assumed to match the model
random_input = torch.randn(8, 3)
random_output = torch.randn(8, 1)

optimizer = SGD(params=model.parameters(), lr=0.1)
criterion = MSELoss()

optimizer.zero_grad()
output = model(random_input)
loss = criterion(output, random_output)
loss.backward()
optimizer.step()

Do you have any ideas?
