Resuming optimizers, dataloaders, datasets along with model

What is the best way to resume training in PyTorch?
Saving the optimizer would be necessary, since some optimizers store momentum buffers for the gradients in addition to the current gradients.
What is an elegant way to do this?

The most straightforward way I can think of for the optimizers is using state_dict() and load_state_dict().
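A minimal sketch of the state_dict()/load_state_dict() round trip, using a tiny hypothetical model and SGD with momentum so the optimizer actually has momentum buffers to restore:

```python
import torch
import torch.nn as nn

# Hypothetical minimal setup: a tiny model and SGD with momentum.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Take one step so the optimizer accumulates momentum buffers.
loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()

# Save both state_dicts (two separate torch.save calls).
torch.save(model.state_dict(), "model.pt")
torch.save(optimizer.state_dict(), "optimizer.pt")

# Restore into fresh instances to resume training.
model2 = nn.Linear(4, 2)
optimizer2 = torch.optim.SGD(model2.parameters(), lr=0.1, momentum=0.9)
model2.load_state_dict(torch.load("model.pt"))
optimizer2.load_state_dict(torch.load("optimizer.pt"))
```

After loading, the restored optimizer carries the same momentum buffers as the original, so the next step behaves as if training had never been interrupted.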
For the dataloader there isn't a built-in way. I guess the implicit recommendation is to start from a fresh epoch (you could play games there, too, like caching the permutation of items it is currently going through and the current position), but then what should e.g. len(dl) return?
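The "caching the permutation and position" game could be sketched roughly like this, with a hypothetical resumable sampler (this is not a built-in PyTorch API; the class name and its perm/pos attributes are made up for illustration):

```python
import torch
from torch.utils.data import Sampler

class ResumableRandomSampler(Sampler):
    """Hypothetical sampler that records its permutation and position
    so an interrupted epoch can be resumed mid-way (not a built-in API)."""

    def __init__(self, data_source, seed=0):
        self.data_source = data_source
        g = torch.Generator().manual_seed(seed)
        # Fixed permutation for this epoch plus a cursor into it.
        self.perm = torch.randperm(len(data_source), generator=g).tolist()
        self.pos = 0

    def __iter__(self):
        # Resume from self.pos instead of always starting at 0.
        while self.pos < len(self.perm):
            idx = self.perm[self.pos]
            self.pos += 1
            yield idx

    def __len__(self):
        # Report the full epoch length; the items actually remaining are
        # len(self.perm) - self.pos, which is exactly the len(dl) ambiguity.
        return len(self.perm)

# Simulate an interrupted epoch: draw 4 of 10 indices, then "checkpoint".
sampler = ResumableRandomSampler(list(range(10)), seed=0)
it = iter(sampler)
first_part = [next(it) for _ in range(4)]
saved_perm, saved_pos = list(sampler.perm), sampler.pos

# "Restart": rebuild the sampler and restore its cached state.
resumed = ResumableRandomSampler(list(range(10)), seed=0)
resumed.perm, resumed.pos = saved_perm, saved_pos
rest = list(iter(resumed))  # yields the 6 indices that were never seen
```

Such a sampler could be passed to DataLoader via its sampler= argument, though multiprocessing workers would complicate the bookkeeping further.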

Best regards

Thomas

What about unifying torch.save for the optimizer and model, so that the optimizer can be saved together with the model and loaded together with it?
I feel really lazy calling torch.save twice :stuck_out_tongue: , although doing so makes sense (somewhat) in my opinion.

There is a good example in the ImageNet demo.

Ohhh, exactly what I was looking for. I didn't know that torch.save works on a dictionary as well!!
Thanks :smiley: