Today I noticed something strange: when I feed the same image into the model alone versus as part of a batch, the outputs of the layers for that image are slightly different.
- The variable images contains 100 pictures, which have been normalized. I ran them through the resnet18 model and printed the output of the last layer. The values for the last picture are as follows:
tensor([[ 0.3845428824, -0.5180321932, 0.2798461318, -0.5726045370, 0.1901987493, -0.2272544652, -0.8677281737, 0.4837094247, -0.0999206007, -0.1482645720]])
- I then chose only the last picture as input, and we can see that the output differs from the values above:
tensor([[ 0.3845427036, -0.5180319548, 0.2798460722, -0.5726044774, 0.1901988089, -0.2272544056, -0.8677282333, 0.4837094545, -0.0999205410, -0.1482645869]])
- I duplicated the last picture 100 times as input, and the result matches the batch result from the first experiment:
[ 0.3845428824, -0.5180321932, 0.2798461318, -0.5726045370, 0.1901987493, -0.2272544652, -0.8677281737, 0.4837094247, -0.0999206007, -0.1482645720]
- I also put the last three pictures into the net, and the output is:
It seems that the batch size of the input is the key reason for the difference in the outputs. I can't understand this, especially since when I do the same thing in TensorFlow there is no difference, under the condition of the same batch_size.
Why does this happen? Please help me find out what causes this in PyTorch, thank you!
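For reference, the effect can be reproduced without resnet18. Below is a minimal sketch (my own construction, using a single nn.Linear as a stand-in for the full network) that compares the last sample's output when it is computed inside a batch of 100 versus alone. Depending on the backend, the two results may differ in the last few bits, because the BLAS/cuDNN kernels chosen for different batch shapes can sum products in a different order, and float32 addition is not associative; the results should still agree within floating-point tolerance:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for resnet18: one linear layer, 512 -> 10 classes
model = nn.Linear(512, 10)
model.eval()

# 100 random "images" flattened to feature vectors
x = torch.randn(100, 512)

with torch.no_grad():
    batch_out = model(x)[-1]                # last sample, computed in a batch of 100
    single_out = model(x[-1:]).squeeze(0)   # the same sample, computed alone

# The two results agree up to float32 tolerance, but may not be bitwise equal
print(torch.allclose(batch_out, single_out, atol=1e-6))
print((batch_out - single_out).abs().max())
```

If the maximum absolute difference printed here is on the order of 1e-7 or smaller, the batch-size-dependent discrepancy is just float32 rounding from different kernel/summation orders, not a bug in the model.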