I tried to find a solution to this in other threads, but I couldn't find a problem like mine.
I am training a feed-forward NN and, once it is trained, I save it using:
torch.save(model.state_dict(), model_name)
Then I get some more data points and I want to retrain the model on the new set, so I load the model using:
model.load_state_dict(torch.load('file_with_model'))
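To make the setup concrete, here is roughly what my save/load cycle looks like (the network, file name, and hyperparameters below are just placeholders; my real model is defined elsewhere):

import torch
import torch.nn as nn

# Placeholder feed-forward network; the real model is defined elsewhere.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# ... original training loop here ...

# Once trained, save only the model weights.
torch.save(model.state_dict(), "model.pt")

# Later: rebuild the same architecture and load the saved weights.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
model.load_state_dict(torch.load("model.pt"))
model.train()

# A new optimizer is created from scratch, since only the weights were saved.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# ... retraining loop on the new data here ...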
When I start training the model again, the error increases a lot. To check whether it was a problem with the new points or with the way I'm loading the model, I saved a trained model and loaded it again to retrain it on the same set of points. Even then, the error on the very first epoch is much higher than the error of the already-trained model.
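To make the comparison concrete, a stripped-down version of the check would look like this (MyNet, x_train, y_train, and loss_fn stand in for my own network, data, and loss): compare the loss on the same points just before saving with the loss right after loading, before the first optimizer step.

# Loss on the training points right before saving.
model.eval()
with torch.no_grad():
    loss_before = loss_fn(model(x_train), y_train).item()

torch.save(model.state_dict(), "model.pt")

# Reload into a fresh instance of the same architecture and evaluate
# on the same points, before any optimizer step is taken.
reloaded = MyNet()  # placeholder constructor for the same architecture
reloaded.load_state_dict(torch.load("model.pt"))
reloaded.eval()
with torch.no_grad():
    loss_after = loss_fn(reloaded(x_train), y_train).item()

print(loss_before, loss_after)  # should match if the weights round-trip correctly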
Is this normal? Should I be doing anything else when loading a model for retraining?
Thank you very much