I’m using the pre-trained Inception-V3 model from torchvision, and my input tensor to the model has a shape of (N, 3, 299, 299). However, I’ve noticed that changing the value of N affects the model’s predictions, even though I’ve switched the model to evaluation mode before running inference. For example, in the screenshot, when I set N to 4 and compare the first row of the model’s output against the output with N set to 1, the values are not the same.

How large is the actual error? You might just be observing the small, expected differences caused by the limited floating-point precision.

Thank you for your response. I have observed that when N changes, most of the predicted values show variation starting from the third decimal place. Could this be caused by floating-point precision, given that both the model and the input tensor are float32?