Optimizer lr changing after loading state dict

After I load my optimizer state dict from a previously run session that used a different lr, the new optimizer's lr also changes.

e.g.

lr = 0.01
opt = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999), eps=1e-08, weight_decay=weight_decay)
for group in opt.param_groups: print(group['lr']); break  # lr before loading
opt.load_state_dict(torch.load(load_saved)['optimizer'])
for group in opt.param_groups: print(group['lr']); break  # lr after loading the saved state

prints

0.01
0.05

Is this the correct behavior? I am reloading the previous state but want to lower the lr to continue training.

I assume the learning rate was stored in the state_dict and thus loaded in:

opt.load_state_dict(torch.load(load_saved)['optimizer'])

which I would consider the correct behavior, since it restores the previous session's settings.
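For reference, the lr is stored per param group inside the optimizer's state_dict, so you can check the saved value directly:

sd = opt.state_dict()
print(sd['param_groups'][0]['lr'])  # the lr that was saved, 0.05 in your case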
If you want to use a different lr after loading the state_dict, you can set it directly in each param_group, e.g. as in the sketch below.
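A minimal sketch, assuming you want to continue training with the new lr of 0.01 from your example (applied to all param groups):

opt.load_state_dict(torch.load(load_saved)['optimizer'])
new_lr = 0.01
for group in opt.param_groups:
    group['lr'] = new_lr  # override the lr restored from the checkpoint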

Let me know if I misunderstood the question.