Using and not using model.eval()

When I use model.eval() with torch.no_grad() during evaluation, I get 36% accuracy. But if I do not use model.eval() and only use torch.no_grad(), my accuracy is 87%. Why is this happening?

This might be happening if you are using e.g. batchnorm layers with “bad” running stats for the mean and var. If you run the model in training mode during evaluation, the batch statistics will be used instead of the running stats, and the running stats will also be updated (which could be seen as a data leak for future model.eval() validation runs).
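A minimal sketch of the difference (the data and layer here are made up for illustration): a BatchNorm layer in train mode normalizes with the current batch's statistics, while in eval mode it uses the running stats, which start at mean 0 / var 1 and may still be far from the data distribution:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

bn = nn.BatchNorm1d(3)  # running_mean starts at 0, running_var at 1
x = 10 + 5 * torch.randn(64, 3)  # toy data far from the default running stats

bn.train()
out_train = bn(x)  # normalized with the batch's own mean/var -> mean ~0
print(out_train.mean().item())

bn.eval()
out_eval = bn(x)  # normalized with the barely-updated running stats
print(out_eval.mean().item())  # far from 0: the running stats are stale
```

With only one training-mode forward pass, the running stats have moved just a fraction (the default momentum is 0.1) toward the true mean of 10 and var of 25, so the eval-mode output is badly mis-normalized.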
You could play around with the momentum to smooth the updates of the running stats, and check other posts in this forum, as other users have faced the same issue.
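For example, you could lower the momentum and re-estimate the running stats with a few forward passes in train mode before evaluating (the model and data below are placeholders, since your architecture isn't shown):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for your model
model = nn.Sequential(nn.Linear(8, 16), nn.BatchNorm1d(16), nn.ReLU())

# Smaller momentum -> running stats change more slowly (default is 0.1)
for m in model.modules():
    if isinstance(m, nn.BatchNorm1d):
        m.momentum = 0.01

# Re-estimate the running stats without updating the weights
model.train()
with torch.no_grad():
    for _ in range(100):
        batch = torch.randn(32, 8)  # replace with real training batches
        model(batch)

model.eval()  # eval now uses the freshly estimated running stats
```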