Failing to restart training

I'm trying to checkpoint a model during training, stop the training, and then restart it from the checkpoint, but the loaded model seems to behave differently from the one I saved.

At save time the model has a validation loss of 134, but after loading the checkpoint I get 245. Has anyone else had a similar issue?

PS: Yes, I do also save the optimizer state_dict.
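
For context, this is roughly the save/restore pattern I'm using (a minimal sketch; `model`, `optimizer`, `epoch`, `val_loss`, and the checkpoint path are placeholders for my actual code):

```python
import torch

# Saving a checkpoint: store both the model and optimizer state_dicts
# so training can resume where it left off.
checkpoint = {
    "epoch": epoch,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "val_loss": val_loss,
}
torch.save(checkpoint, "checkpoint.pt")

# Restoring it before resuming training.
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
start_epoch = checkpoint["epoch"] + 1

# Dropout and batch norm behave differently in train vs. eval mode,
# which can change the measured loss, so set the mode explicitly:
model.eval()   # before computing validation loss
model.train()  # before resuming training
```

One thing worth double-checking with symptoms like this is that the model is in `eval()` mode for both validation passes; a mode mismatch alone can shift the loss noticeably.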

Update: I updated PyTorch and the problem disappeared.