Trained model does not always give the same result for the same input

Hello guys,
So I trained a model, but afterwards, at inference time, I do not always get the same result for the same input — and only for some images. I find this strange and cannot find any logical explanation for it.
Any idea would be appreciated.
Thank you in advance.

Do you have any dropout layers enabled during inference? Make sure the model is in eval() mode.
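To illustrate why an active dropout layer causes this, here is a minimal sketch (the toy model and shapes are made up for the example): in train mode, dropout samples a fresh random mask on every forward pass, so the same input produces different outputs; `eval()` turns dropout into a no-op.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model with a dropout layer (hypothetical, just for illustration).
model = nn.Sequential(nn.Linear(256, 256), nn.Dropout(p=0.5))
x = torch.randn(1, 256)

model.train()  # this is the default mode after construction/training
y1, y2 = model(x), model(x)
print(torch.equal(y1, y2))  # False: a new dropout mask is drawn each call

model.eval()   # dropout becomes the identity
z1, z2 = model(x), model(x)
print(torch.equal(z1, z2))  # True: output is now deterministic
```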

Thank you for your answer, but I do not have any dropout layers.
I still cannot figure out why this is happening.


You may have batch normalization layers. If so, calling model.eval() and then running inference inside a torch.no_grad() context is the proper way to evaluate. Otherwise, statistics computed from the current batch will be used during the evaluation/test phase, so the output for a given image depends on the other images in its batch and will vary between runs.
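A minimal sketch of the difference (the toy model and shapes are hypothetical): in train mode, BatchNorm normalizes each sample with the current batch's statistics, so the same sample gives different outputs depending on its batch mates; in eval mode it uses the stored running statistics, making the per-sample output deterministic.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model with a batch-norm layer (hypothetical, just for illustration).
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.BatchNorm1d(16),
    nn.ReLU(),
    nn.Linear(16, 2),
)

sample = torch.randn(1, 8)  # the "same input" we query twice

# Train mode: BatchNorm uses the CURRENT batch's mean/variance,
# so the output for `sample` changes with its (random) batch mates.
model.train()
out_a = model(torch.cat([sample, torch.randn(3, 8)]))[0]
out_b = model(torch.cat([sample, torch.randn(3, 8)]))[0]
print(torch.allclose(out_a, out_b))  # False: batch statistics differ

# Proper evaluation: eval() switches BatchNorm to its running statistics,
# and no_grad() disables gradient tracking during inference.
model.eval()
with torch.no_grad():
    out_1 = model(torch.cat([sample, torch.randn(3, 8)]))[0]
    out_2 = model(torch.cat([sample, torch.randn(3, 8)]))[0]
print(torch.equal(out_1, out_2))  # True: output depends only on the sample
```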