Maybe my answer here is too basic but this is how I generally do this:
Load the model: from model_location import model
Instantiate the model: m = model()
Load the state dictionary required: m.load_state_dict(torch.load('state_dictionary_saved_at_epoch_7.pt'))
(As you have saved everything as saved.pt, you might have to load that dictionary first and then, instead of 'state_dictionary_saved_at_epoch_7.pt', pass whatever was stored under 'state_dict': model.state_dict() at epoch 7. Perhaps this is the subtlety of your question that most of this answer does not address properly? You would also have to restore the optimizer state dictionary and the loss value, but I don't know how to do that.)
Set the training state of the model m to true (this is the default after instantiation, but being explicit does no harm): m.train(True)
…and then continue training using the model now called m in this example.
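To fill in the part I said I didn't know how to do: the optimizer state can be saved and restored the same way as the model state. A minimal sketch, assuming a toy nn.Linear model, an SGD optimizer, and a hypothetical file name checkpoint.pt (your actual model class, optimizer, and paths will differ):

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for your model class
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Saving: bundle model state, optimizer state, and bookkeeping into one dict
checkpoint = {
    'epoch': 7,
    'state_dict': model.state_dict(),
    'optimizer': optimizer.state_dict(),
    'loss': 0.123,  # placeholder value
}
torch.save(checkpoint, 'checkpoint.pt')

# Resuming: restore both state dicts before continuing the training loop
m = nn.Linear(4, 2)
opt = torch.optim.SGD(m.parameters(), lr=0.01)
checkpoint = torch.load('checkpoint.pt')
m.load_state_dict(checkpoint['state_dict'])
opt.load_state_dict(checkpoint['optimizer'])
start_epoch = checkpoint['epoch'] + 1  # continue from the next epoch
```

Restoring the optimizer matters for optimizers with internal buffers (momentum, Adam's moment estimates); with a fresh optimizer those buffers start at zero and the first resumed steps behave differently.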
Thank you for your explanation, but I'm still not clear on how to replace 'state_dictionary_saved_at_epoch_7.pt' with 'saved.pt' and the rest. Could you give a code example?
import torch
from your_model import model  # Replace your_model with the name of your model class

m = model()  # Create a new model
state = torch.load('saved.pt')  # Load the whole dictionary, as that's what you have saved
m.load_state_dict(state['state_dict'])
# There is no need for m.train() if you want to continue with training
P.S.
You are always saving the state to the same file, so only the newest model state is kept.
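One way around that, as a sketch, is to put the epoch number in the file name so every checkpoint survives (the file name pattern here is just an example):

```python
import torch
import torch.nn as nn

# Toy model standing in for your real one
model = nn.Linear(4, 2)

saved_paths = []
for epoch in range(3):
    # ... training steps for this epoch would go here ...
    path = f'state_dictionary_saved_at_epoch_{epoch}.pt'  # one file per epoch
    torch.save({'epoch': epoch, 'state_dict': model.state_dict()}, path)
    saved_paths.append(path)
```

This also makes it easy to go back to an earlier epoch (say, the one before overfitting started) instead of only the last one.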
Yes, loading a saved state dictionary loads the pretrained weights, so training won't start from scratch.
Just note that with this method you have to be careful not to change the model architecture and then try to load the state dictionary into the new model, as the tensors will obviously no longer share the same shapes; something that has stung me in the past.
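A quick way to see that failure mode, using two toy linear layers whose input sizes differ (both layers here are illustrative, not from the thread):

```python
import torch
import torch.nn as nn

saved = nn.Linear(4, 2)    # architecture at save time
changed = nn.Linear(8, 2)  # architecture changed afterwards: in_features 4 -> 8

try:
    changed.load_state_dict(saved.state_dict())
    loaded = True
except RuntimeError as e:
    # load_state_dict raises a RuntimeError reporting the size mismatch
    loaded = False
    print(e)
```

Note that load_state_dict(..., strict=False) only tolerates missing or unexpected keys; a shape mismatch on a key present in both models still raises.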