I have a general question regarding saving and loading models in PyTorch.
My case:
I save a checkpoint consisting of model.state_dict(), optimizer.state_dict(), and the last epoch.
The saved checkpoint refers to the best performing model, evaluated by accuracy.
I load all three checkpoint entries and resume. However, I do not want to continue training; I want to use the saved state to run one forward pass and get the same accuracy I had when I saved the checkpoint. How can I do that?
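A minimal sketch of what this could look like. The checkpoint keys, file name, and the tiny model are all hypothetical placeholders; the important parts are calling `model.eval()` and wrapping the forward pass in `torch.no_grad()` so nothing is trained:

```python
import torch
import torch.nn as nn

# Hypothetical model matching the architecture used at save time.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Save a checkpoint with the three entries described above
# (key names and path are assumptions).
checkpoint = {
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "epoch": 5,
}
torch.save(checkpoint, "best_model.pt")

# Later: restore the state and evaluate without resuming training.
checkpoint = torch.load("best_model.pt")
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()  # switch layers like Dropout/BatchNorm to eval behavior

with torch.no_grad():  # no gradient tracking for a pure forward pass
    x = torch.randn(4, 10)
    logits = model(x)
```

The optimizer state only matters if you resume training, so for a pure evaluation pass you can skip loading it.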
Basically, I want to be able to reproduce my results, but I have not figured out how to seed everything in PyTorch (it somehow does not really work), so I figured I could do it by saving and loading models.
First of all, here you can find the PyTorch documentation on reproducibility.
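One common way to seed the relevant random number generators is a small helper along these lines (a sketch; the function name and seed value are arbitrary, and the cuDNN flags trade speed for determinism):

```python
import random

import numpy as np
import torch


def seed_everything(seed: int) -> None:
    # Seed the common sources of randomness.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op without a GPU
    # Optionally force deterministic cuDNN kernels (can slow things down).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything(0)
a = torch.randn(3)
seed_everything(0)
b = torch.randn(3)  # same seed, same draw
```

Note that seeding alone is not always enough: non-deterministic CUDA ops and data-loading order can still introduce variation.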
I think your results should be reproducible if you are able to load the same data as in the original evaluation (e.g. your validation dataset) and you didn't use any random operations, such as Dropout, in the first evaluation.
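To illustrate the Dropout point: in `train()` mode Dropout randomly zeroes activations, but in `eval()` mode it is an identity op, so repeated forward passes match. A small sketch:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()
y_train = drop(x)   # random zeroing (and rescaling) is active

drop.eval()
y_eval1 = drop(x)   # eval mode: Dropout passes the input through unchanged
y_eval2 = drop(x)   # identical on every call
```

This is why calling `model.eval()` before the evaluation pass matters for getting the saved accuracy back.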
How different are your current results from the original one?
I am aware of the different seeding functions in PyTorch and I have used them. Somehow my results still differ by roughly 1–2%. My model does not use Dropout…