Slightly different results for different batch sizes

I am using a VGG19 network to classify CIFAR10 images, yet I have slightly different results for data of different batch sizes. Code:

f_image = net.forward(Variable(image, requires_grad=True)).data
f_image2 = net.forward(Variable(image[0].unsqueeze(0), requires_grad=True)).data
f_image3 = net.forward(Variable(image[0].unsqueeze(0), requires_grad=True)).data
print('-----------------------------------------')
print('f_image - f_image2 = {}'.format((f_image[0] - f_image2).cpu().data.numpy()))
print('f_image - f_image3 = {}'.format((f_image[0] - f_image3).cpu().data.numpy()))
print('f_image2 - f_image3 = {}'.format((f_image2 - f_image3).cpu().data.numpy()))
print('-----------------------------------------')

Here, image has shape [batch_size, n_channels, height, width], so I expect f_image[0].unsqueeze(0) to equal f_image2 and f_image3. However, they are slightly different:


f_image - f_image2 = [[ 0.0000000e+00 3.7252903e-09 -1.8626451e-09 3.7252903e-09
0.0000000e+00 -3.7252903e-09 3.7252903e-09 -1.8626451e-09
3.7252903e-09 -1.3969839e-09]]
f_image - f_image3 = [[ 0.0000000e+00 3.7252903e-09 -1.8626451e-09 3.7252903e-09
0.0000000e+00 -3.7252903e-09 3.7252903e-09 -1.8626451e-09
3.7252903e-09 -1.3969839e-09]]
f_image2 - f_image3 = [[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
-----------------------------------------

The difference is very small. Is this a bug or expected behavior of PyTorch?

Values smaller than 1e-6 are due to the limited floating point precision, so you shouldn’t worry about them.
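To see why such tiny differences are normal, here is a small illustration in plain Python (no PyTorch needed): floating point addition is not associative, so summing the same numbers in a different order can change the result in the last bits of the mantissa. A batched forward pass and a single-sample forward pass generally reduce sums in different orders, so the same effect appears there.

```python
# Floating point addition is not associative: regrouping the same
# operands can change the result at the level of rounding error.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one summation order
right = a + (b + c)  # another order, same operands

print(left, right, left - right)
assert left != right            # the two sums are not bit-identical...
assert abs(left - right) < 1e-6  # ...but the difference is tiny
```

The differences you see (on the order of 1e-9) are exactly this kind of rounding noise, not a bug.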

Unrelated to this, but Variables are deprecated since PyTorch 0.4.0, so you can just use tensors now.
Also, I wouldn’t recommend using .data anymore, as it might have negative side effects, since Autograd cannot track changes made to the .data attribute. This isn’t really important in your current code, but might lead to issues in the future. :wink:
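As a sketch of what that migration could look like (the Linear layer below is just a hypothetical stand-in for your VGG19 net, so the shapes are illustrative only): plain tensors carry autograd state since 0.4.0, detach() replaces .data, and torch.no_grad() is the cleanest option for pure inference.

```python
import torch

# Hypothetical placeholder for the VGG19 model from the question.
net = torch.nn.Linear(4, 10)
image = torch.randn(8, 4, requires_grad=True)  # no Variable(...) wrapper needed

# detach() instead of .data: returns a tensor cut off from the graph
# without the silent-tracking problems of .data.
f_image = net(image).detach()
f_image2 = net(image[0].unsqueeze(0)).detach()

# For pure inference, disable autograd entirely:
with torch.no_grad():
    f_image3 = net(image[0].unsqueeze(0))
```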

But even for the same data sample (image[0] in my case), there are small differences. Does the rounding policy of PyTorch have different behaviors when it comes to different batch sizes?

Yes, even for the same data, you might run into floating point precision limits.
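An analogy in NumPy float32 shows the same effect outside PyTorch (sketch with made-up shapes, not your model): the same row pushed through a matrix multiply alone vs. inside a batch may take different BLAS code paths with different blocking and summation orders, so the two results are only guaranteed to agree up to rounding error, not bit-for-bit.

```python
import numpy as np

# The same first row, multiplied alone vs. as part of a batch.
rng = np.random.default_rng(0)
batch = rng.standard_normal((128, 512)).astype(np.float32)
weights = rng.standard_normal((512, 10)).astype(np.float32)

out_batched = batch @ weights    # whole batch at once
out_single = batch[0] @ weights  # first sample alone

# The two outputs agree up to float32 rounding error, but the
# maximum difference is not necessarily exactly zero.
diff = np.abs(out_batched[0] - out_single).max()
print(diff)
assert diff < 1e-4
```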
You could try to enable deterministic behavior as described here, but I’m not sure if your results will get more deterministic.

Thanks! Actually I have enabled the deterministic behavior. The difference still exists. The deterministic flags ensure that I have the same difference in different runs, but do not eliminate the differences.

So I guess PyTorch might have some “trade-off” policy for samples in the same batch: if you want maximal precision for the 1st sample in a batch, you might lose more precision in other samples of the batch. And this “trade-off” policy is why I get slightly different results for different batch sizes. Is my understanding correct?

Could you help me check my problem: Different batch sizes give different test accuracies. Thanks!

Hello, did you solve this problem? I am strongly affected by it in semantic segmentation: I get very different results for different batch sizes during prediction.