As the title says, I used torch.save(model.state_dict()) to save my model. I checked and only the weights and biases were saved; other state, such as the optimizer's, was not saved during training. How can I continue training? I tried model.load_state_dict(torch.load('***.pkl')), but I don't know what to do next. Some reference code would be appreciated.
This tutorial explains how to store and load state_dicts and general checkpoints. Once you've loaded the model's and optimizer's state_dicts, you can continue your training as before.
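Roughly like this, following the checkpoint-dict pattern from the linked tutorial; the tiny Linear model, filename, and hyperparameters below are placeholders for your own setup:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Placeholder model and optimizer standing in for your own
model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One dummy training step so the optimizer has internal buffers to save
optimizer.zero_grad()
model(torch.randn(4, 10)).sum().backward()
optimizer.step()

# Save a general checkpoint: model AND optimizer state together
checkpoint = {
    'epoch': 1,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}
torch.save(checkpoint, 'checkpoint.pth')

# Later: rebuild model/optimizer, restore both, and continue training
model2 = nn.Linear(10, 2)
optimizer2 = optim.SGD(model2.parameters(), lr=0.01, momentum=0.9)
ckpt = torch.load('checkpoint.pth')
model2.load_state_dict(ckpt['model_state_dict'])
optimizer2.load_state_dict(ckpt['optimizer_state_dict'])
start_epoch = ckpt['epoch'] + 1
model2.train()  # set back to training mode before resuming the loop
```

Saving the optimizer's state_dict alongside the model's is what makes the resumed run behave like an uninterrupted one, since it restores internal buffers such as momentum.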
Hi, but I haven't saved the optimizer's state_dict, only the weights and biases. Can I continue training with what I have saved?
Yes, you can continue training with it, but you would most likely get a different loss curve compared to an uninterrupted "full" run (if your optimizer uses internal buffers), since those buffers were not saved and will start fresh.
If you load only the model's state_dict, your use case is similar to the standard fine-tuning approach.
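A minimal sketch of that fine-tuning-style resume, assuming only the weights file exists; the model definition, filename, and hyperparameters are placeholders, and the fresh optimizer starts with empty buffers:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Simulate the situation: only the model's state_dict was saved earlier
# ('weights.pkl' is a placeholder for your saved file)
torch.save(nn.Linear(10, 2).state_dict(), 'weights.pkl')

# Resume: rebuild the same architecture and load only the weights
model = nn.Linear(10, 2)
model.load_state_dict(torch.load('weights.pkl'))

# A *fresh* optimizer; its momentum buffers start empty, so the loss
# curve may differ from an uninterrupted run
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

model.train()
# Stand-in for your DataLoader loop
for inputs, targets in [(torch.randn(4, 10), torch.randn(4, 2))]:
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()
```

The loaded weights give the training loop its starting point; everything else (optimizer state, epoch counter, LR schedule) restarts from scratch.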