Difference between nn.BatchNorm2d.forward() and calculation using torch.Tensor

Hi,

I've found a slight difference between the results of nn.BatchNorm2d.forward() and the same calculation done manually with torch.Tensor operations. Specifically, if I run the following code:

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(10)

# feed data to update running_mean and running_var
for _ in range(100):
    dummy_data = torch.randn(8, 10, 100, 100)
    _ = bn(dummy_data)

# now test
bn.eval()

# result of the layer's forward pass
data_in = torch.randn(8, 10, 100, 100)
output_bn = bn(data_in)

# result of the manual calculation
mean = bn.running_mean.view(1, 10, 1, 1)
var = bn.running_var.view(1, 10, 1, 1)
weight = bn.weight.view(1, 10, 1, 1)
bias = bn.bias.view(1, 10, 1, 1)
eps = bn.eps

data_in_norm = (data_in - mean) / torch.sqrt(var + eps)
output_manual = weight * data_in_norm + bias

# calculate the error
err = (output_bn - output_manual).abs()
print(torch.max(err))

It will print something like:

tensor(4.7684e-07, grad_fn=<MaxBackward1>)

Basically, the results differ slightly between the forward function of nn.BatchNorm2d and doing the same calculation manually with torch.Tensor operations. Is there something wrong with my manual calculation? Am I missing something here? Thank you very much for any suggestions.

Your calculation looks correct, and the small absolute error points to floating point precision limitations, which are expected: the internal implementation likely performs the operations in a different order (e.g., multiplying by the reciprocal square root rather than dividing by the square root), so the float32 rounding differs in the last bits. If you need a lower error, you could use float64 as the dtype.
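
For concreteness, here is a minimal sketch of the same comparison in double precision (my illustration, not tested on your exact setup); the module and the input are both converted to float64:

import torch
import torch.nn as nn

# .double() converts the parameters and the running stats to float64
bn = nn.BatchNorm2d(10).double()

# feed data to update running_mean and running_var
for _ in range(100):
    _ = bn(torch.randn(8, 10, 100, 100, dtype=torch.float64))

bn.eval()

data_in = torch.randn(8, 10, 100, 100, dtype=torch.float64)
output_bn = bn(data_in)

mean = bn.running_mean.view(1, 10, 1, 1)
var = bn.running_var.view(1, 10, 1, 1)
weight = bn.weight.view(1, 10, 1, 1)
bias = bn.bias.view(1, 10, 1, 1)

output_manual = weight * (data_in - mean) / torch.sqrt(var + bn.eps) + bias

# the max absolute error should now be on the order of 1e-16
print(torch.max((output_bn - output_manual).abs()))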

Yes, the error dropped to the 1e-16 level with double precision. Thank you :blush: