I’m saving my model state_dict at every epoch.
When the model starts overfitting, I load the state_dict from an earlier epoch and keep training, but the train accuracy just keeps climbing as if nothing happened. However, if I stop training, restart the notebook, and load that same state_dict, it works as expected.
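A minimal sketch of my setup (the model, optimizer, and data here are placeholder stand-ins for my real ones):

```python
import copy

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-ins for my real model and optimizer.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

snapshots = {}
for epoch in range(3):
    # Dummy training step on random data.
    loss = model(torch.randn(8, 4)).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # state_dict() returns references to the live tensors,
    # so deep-copy when keeping snapshots in memory.
    snapshots[epoch] = copy.deepcopy(model.state_dict())

# Roll back to the weights saved after epoch 0.
model.load_state_dict(snapshots[0])
```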
While training:

```
Epoch 30 - Train Acc 70%
…
Epoch 40 - Train Acc 85%
(load epoch 30 weights)
Epoch 41 - Train Acc 86%
```
If I restart the notebook:

```
(load epoch 30 weights)
Epoch 41 - Train Acc 71%
```
How can I completely roll back to the state of a certain epoch without interrupting training? I only want the weights from the previous epoch, not the optimizer momentum etc.
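To illustrate what I mean by "not the momentum": a small sketch (again with placeholder model/optimizer) showing that `load_state_dict` restores the weights in place while the optimizer's momentum buffers from the later epochs are left untouched. Is that enough, or could the leftover optimizer state explain what I'm seeing?

```python
import copy

import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Snapshot the weights ("epoch 30").
saved = copy.deepcopy(model.state_dict())

# A few more training steps ("epochs 31-40").
for _ in range(2):
    loss = model(torch.randn(8, 4)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Restore the saved weights in place...
model.load_state_dict(saved)

# ...but the optimizer still carries momentum buffers from the later steps.
momentum = opt.state[model.weight].get("momentum_buffer")
```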