Model parameters won't update when training from a checkpoint?

I trained a model for 25 epochs and saved the model and optimizer like this:

checkpoint = {'model': trainer.model.state_dict(),
              'optim': trainer.optimizer,
              'epoch': epoch}
torch.save(checkpoint, PATH)

Now I want to load this checkpoint and continue to train the model for 50 epochs:

model.load_state_dict(checkpoint['model'])
optimizer = checkpoint['optim']

I didn't change other parts of the code, and the program runs fine without any exceptions. But later I noticed that the parameters of the model don't update, even though I call model.train() and loss.backward(). Does anybody know why?

The ImageNet example code implements checkpointing slightly differently.

It saves optimizer.state_dict() rather than the optimizer object itself. That is likely the source of your problem: pickling the whole optimizer also pickles copies of the parameter tensors it was tracking, so after loading, optimizer.step() updates those stale copies instead of the parameters of the model you just restored.

You could try that.
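
A minimal sketch of that approach, reusing the trainer, model, optimizer, PATH, and epoch names from your snippets (adjust to your setup):

import torch

# Saving: store state_dicts, not live objects
checkpoint = {'model': trainer.model.state_dict(),
              'optim': trainer.optimizer.state_dict(),
              'epoch': epoch}
torch.save(checkpoint, PATH)

# Loading: construct the model and optimizer first, then restore their states
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['optim'])
start_epoch = checkpoint['epoch'] + 1

This way the optimizer keeps referencing the parameters of the model you just restored, so optimizer.step() updates them and you can resume training from start_epoch.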