# Predict same image but get different probability

Today I found something strange: when I put one or many images into the model and then check the outputs for the same image, the values differ depending on what else is in the batch.

1. The variable `images` contains 100 pictures, which have been normalized. The resnet18 model made a prediction, and I printed the output of the last layer. The values for the last picture are as follows:
tensor([[ 0.3845428824, -0.5180321932, 0.2798461318, -0.5726045370, 0.1901987493, -0.2272544652, -0.8677281737, 0.4837094247, -0.0999206007, -0.1482645720]])
2. I chose only the last picture as input, and we can see that the output is different from the values above:
tensor([[ 0.3845427036, -0.5180319548, 0.2798460722, -0.5726044774, 0.1901988089, -0.2272544056, -0.8677282333, 0.4837094545, -0.0999205410, -0.1482645869]])
3. I duplicated the last picture 100 times as input, and found the result is the same as the result of 1):
[ 0.3845428824, -0.5180321932, 0.2798461318, -0.5726045370, 0.1901987493, -0.2272544652, -0.8677281737, 0.4837094247, -0.0999206007, -0.1482645720]
4. I put only the last three pictures into the net, and the output is:
[ 0.3845428526, -0.5180321336, 0.2798461318, -0.5726043582, 0.1901988238, -0.2272545248, -0.8677282333, 0.4837094247, -0.0999206007, -0.1482645124]
It seems that the batch size of the input is the key reason for the output difference. I can't understand this problem, especially since when I do the same thing in TensorFlow, with the same batch_size, there is no difference.
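For reference, the comparison above can be reproduced with a sketch like this. It uses a small stand-in model with a batchnorm layer instead of the actual resnet18 (and random data instead of the real `images`) to keep it short; everything here is illustrative, not the original code:

```python
import torch
import torch.nn as nn

# Stand-in for the experiment: same picture predicted inside a batch
# of 100 vs. alone. The model is a toy network, not resnet18.
torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)

images = torch.randn(100, 3, 32, 32)  # stands in for the 100 normalized pictures

out_batch = model(images)[-1]       # last picture, predicted inside the batch
out_single = model(images[-1:])[0]  # last picture, predicted alone

# In training mode (the default) this difference is noticeably non-zero,
# because batchnorm normalizes with the statistics of the current batch.
print((out_batch - out_single).abs().max())
```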

Make sure to call `model.eval()` before running your test.
This will make sure to disable dropout layers and use the running estimates of batchnorm layers.
During training, dropout will of course alter your output and batchnorm layers will normalize the input batch using the current batch statistics and will also update the running estimates.
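A small demonstration of this point, using a toy model (not the poster's resnet18) containing both dropout and batchnorm:

```python
import torch
import torch.nn as nn

# Toy model with dropout and batchnorm: in train mode, repeated calls
# on the same input differ; in eval mode they are identical.
torch.manual_seed(0)
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.BatchNorm1d(8),
    nn.Dropout(0.5),
    nn.Linear(8, 2),
)
x = torch.randn(3, 4)

model.train()
a, b = model(x), model(x)
print(torch.equal(a, b))  # False: dropout resamples its mask each call

model.eval()
c, d = model(x), model(x)
print(torch.equal(c, d))  # True: dropout disabled, batchnorm uses running stats
```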

I’m not sure how TensorFlow handles it.

As you suggested, I added model.eval() and then ran it; the situation changes, but I still get different probabilities. This time, 2) and 3) become the same:
[-3.5615596771, -3.3041067123, 0.2262650430, -0.5413500071, 5.4653391838, 3.9844801426, -4.7832412720, 14.4310092926, -6.6198024750, -6.1641607285]
while 1) gives:
[-3.5615587234, -3.3041062355, 0.2262653410, -0.5413498878, 5.4653387070, 3.9844801426, -4.7832412720, 14.4310083389, -6.6198034286, -6.1641597748]
and 4) gives:
[-3.5615592003, -3.3041067123, 0.2262655199, -0.5413499475, 5.4653387070, 3.9844801426, -4.7832412720, 14.4310073853, -6.6198034286, -6.1641602516]
1), 2), and 4) are different, which suggests that the other data in the same batch influences the output.
Is there some mechanism in PyTorch which brings in these small changes? Thank you again!
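To put the remaining discrepancy into perspective, the three vectors quoted above can be compared directly; their element-wise differences are on the order of 1e-6:

```python
import torch

# The three result vectors quoted above: batch of 100, single image, batch of 3.
out_100 = torch.tensor([-3.5615587234, -3.3041062355, 0.2262653410, -0.5413498878,
                        5.4653387070, 3.9844801426, -4.7832412720, 14.4310083389,
                        -6.6198034286, -6.1641597748])
out_1 = torch.tensor([-3.5615596771, -3.3041067123, 0.2262650430, -0.5413500071,
                      5.4653391838, 3.9844801426, -4.7832412720, 14.4310092926,
                      -6.6198024750, -6.1641607285])
out_3 = torch.tensor([-3.5615592003, -3.3041067123, 0.2262655199, -0.5413499475,
                      5.4653387070, 3.9844801426, -4.7832412720, 14.4310073853,
                      -6.6198034286, -6.1641602516])

# Maximum absolute differences: roughly 1e-6, i.e. float32 rounding noise.
print((out_100 - out_1).abs().max())
print((out_100 - out_3).abs().max())
```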

It depends on the layers you are using in your model as well as on, e.g., whether you set `cudnn` to use deterministic algorithms as described here.
If the absolute difference is in the range of ~1e-6, it will most likely be due to the limited floating point precision.
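The deterministic `cudnn` settings mentioned above can be enabled like this (a minimal sketch; note that these flags reduce run-to-run nondeterminism on GPU, but do not remove float32 rounding differences caused by different batch sizes):

```python
import torch

# Make cudnn pick deterministic algorithms and disable the
# autotuner that may select different kernels between runs.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# Seeding keeps any random ops (e.g. weight init) reproducible as well.
torch.manual_seed(0)
```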