Is loading the model and resuming training the same as training straight through?

Let’s say I have an autoencoder that should be trained for 20 epochs (any model would do).
Is training for 20 epochs straight the same as training for 10 epochs, saving the model, and then re-running it for another 10 epochs?

I set the seeds like this:

import numpy as np
import torch

torch.manual_seed(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(0)

I am using the “load_state_dict” method to load the model. What I’m concerned about is that loading the model could affect its performance.

It doesn’t affect the performance if you do it properly; models are supposed to be robust to this kind of randomness.
The catch is that you also have to save the optimizer’s state_dict, because some optimizers, e.g. Adam, keep running statistics that affect the loss.

If you don’t, you will observe a spike in the loss (which should recover after a few steps).
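A minimal sketch of the checkpointing described above, using a single `nn.Linear` as a hypothetical stand-in for the autoencoder (the layer sizes and learning rate are placeholder assumptions). The point is that both the model’s and the optimizer’s state_dicts go into the checkpoint, so Adam’s running statistics survive the save/load round trip:

```python
import io

import torch
from torch import nn, optim

# Hypothetical stand-in for the autoencoder: a single linear layer.
model = nn.Linear(4, 4)
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# One training step so Adam accumulates its running statistics.
x = torch.randn(8, 4)
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
optimizer.step()

# Save BOTH state_dicts (here to an in-memory buffer; a file path works too).
buffer = io.BytesIO()
torch.save(
    {"model": model.state_dict(), "optimizer": optimizer.state_dict()},
    buffer,
)

# Restore into fresh instances before resuming training.
buffer.seek(0)
checkpoint = torch.load(buffer)
resumed_model = nn.Linear(4, 4)
resumed_optimizer = optim.Adam(resumed_model.parameters(), lr=1e-3)
resumed_model.load_state_dict(checkpoint["model"])
resumed_optimizer.load_state_dict(checkpoint["optimizer"])
```

Training then continues with `resumed_model` and `resumed_optimizer`; loading only the model weights and constructing a fresh Adam would reset its moment estimates, which is exactly what produces the loss spike.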
