I trained my model and saved it. When I load the checkpoint from the model.pth file and run inference, I get different outputs for the same input across runs. Shouldn't the weights and biases be the same for each test of the model?
Thanks for your help!
Yes and you can manually verify it by accessing and comparing the parameters of both models.
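A minimal sketch of such a manual check, assuming a small hypothetical nn.Linear model saved via its state_dict (the model names and file path are illustrative, not from the original thread):

```python
import torch
import torch.nn as nn

# Hypothetical model saved to disk for illustration
model_a = nn.Linear(4, 2)
torch.save(model_a.state_dict(), "model.pth")

# A second instance restored from the same checkpoint
model_b = nn.Linear(4, 2)
model_b.load_state_dict(torch.load("model.pth"))

# Compare every parameter tensor of both models element-wise
for (name_a, p_a), (name_b, p_b) in zip(
    model_a.named_parameters(), model_b.named_parameters()
):
    assert name_a == name_b
    assert torch.equal(p_a, p_b), f"Mismatch in {name_a}"
print("all parameters match")
```

If this passes, the checkpoint was restored correctly and any remaining nondeterminism comes from somewhere other than the weights.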
Did you disable any randomness via model.eval()?
No, I did not. After doing it, I get the same output in every run. But why is there randomness at all when I load saved model checkpoints?
You are most likely using layers which change their behavior between model.train() and model.eval(), such as nn.Dropout layers, which are disabled during model.eval(). It's unrelated to loading a checkpoint and is just the model's training state.
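This behavior can be reproduced with a small sketch, assuming a hypothetical model that contains an nn.Dropout layer (the model and input here are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical model with a dropout layer, for illustration
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

# In training mode dropout samples a new random mask on each
# forward pass, so repeated calls can produce different outputs
model.train()
out_train_1 = model(x)
out_train_2 = model(x)

# In eval mode dropout is a no-op, so the output is deterministic
model.eval()
out_eval_1 = model(x)
out_eval_2 = model(x)
assert torch.equal(out_eval_1, out_eval_2)
print("eval outputs identical:", torch.equal(out_eval_1, out_eval_2))
```

Calling model.eval() before inference puts all such layers into evaluation mode, which is why the outputs became reproducible.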