Performance differs after loading saved models

In my training script, I train an LSTM model and evaluate it after each epoch. After evaluation, I save the model using torch.save (I have tried saving both the state_dict and the entire model) and reload it using torch.load (in the same training script, before continuing to the next epoch). The performance of the loaded model is the same as that of the saved model. However, when I run a separate evaluation script, which uses the same loading code as the training script, the performance is substantially different (accuracy drops from 70% to 10%). I've tried fixing the randomness using the techniques suggested on this page: Reproducibility — PyTorch 1.7.0 documentation
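For reference, this is a minimal sketch of the save/reload pattern described above (the module and file name are illustrative, not the poster's actual code). Note the call to eval() after loading, which matters whenever the model contains dropout or batch-norm layers:

```python
import torch
import torch.nn as nn

# Train-side: save only the learned weights (state_dict).
model = nn.LSTM(input_size=8, hidden_size=16, num_layers=1)
torch.save(model.state_dict(), "model.pt")

# Eval-side: rebuild the same architecture, load the weights,
# and switch to evaluation mode before scoring.
model2 = nn.LSTM(input_size=8, hidden_size=16, num_layers=1)
model2.load_state_dict(torch.load("model.pt"))
model2.eval()  # disables dropout etc. during evaluation

# The reloaded parameters match the originals exactly.
for p1, p2 in zip(model.parameters(), model2.parameters()):
    assert torch.equal(p1, p2)
print("parameters match")
```

If the weights round-trip exactly like this, any accuracy gap between the two scripts must come from the surrounding code (preprocessing, data order, label mapping), not from saving and loading itself.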

Are there any suggestions as to what could cause this huge difference in the performance of the loaded model between the same training script and a separate evaluation script?

This should not happen.

If you share your code, we can have a look.

Thanks, you are right. The problem was not due to randomness in PyTorch; I had a simple Python bug. I was iterating over a set and relying on the order of its items in my code.
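To illustrate the kind of bug described here (with hypothetical helper names): a set's iteration order depends on hashing and, for strings, on per-process hash randomization, so a label-to-index vocabulary built from a set can differ between the training run and the evaluation run. Sorting the unique items makes the mapping deterministic:

```python
def fragile_vocab(labels):
    # Iteration order of a set is not guaranteed across runs --
    # two processes can assign different indices to the same labels.
    return {label: i for i, label in enumerate(set(labels))}

def stable_vocab(labels):
    # Sorting the unique labels fixes the order deterministically.
    return {label: i for i, label in enumerate(sorted(set(labels)))}

labels_a = ["cat", "dog", "bird", "dog"]
labels_b = ["dog", "bird", "cat"]  # same classes, different order

# The stable mapping is identical regardless of input order.
assert stable_vocab(labels_a) == stable_vocab(labels_b)
print(stable_vocab(labels_a))  # → {'bird': 0, 'cat': 1, 'dog': 2}
```

A mismatch like this would explain the observed symptom exactly: the weights load correctly, but the indices fed to the model no longer mean what they did at training time, so accuracy collapses to near chance.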