Simulate nn.BatchNorm2d of PyTorch with Python code (batch normalization in PyTorch)

I want to simulate BatchNorm2d, so I wrote the code below and compared its result with the output of PyTorch's nn.BatchNorm2d. When I divide the output of my code by the output of nn.BatchNorm2d, each channel of my output tensor (hk1, hk2) turns out to be a constant coefficient times the corresponding channel of the nn.BatchNorm2d output. I want to understand what these coefficients are.

import torch
import torch.nn as nn

m1 = nn.BatchNorm2d(2)
inp1 = torch.tensor([[[[1., 0., 3., 2.]], [[0., 2., 0., 4.]]],
                     [[[0., 3., 0., 2.]], [[0., 1., 0., 1.]]],
                     [[[0., 5., 0., 2.]], [[0., 1., 3., 1.]]]])

# per-channel mean and variance over batch and spatial dims
ot0 = torch.mean(inp1, (0, 2, 3))
jazr0 = torch.var(inp1, (0, 2, 3))

# channel 0: center by the mean, then divide by the variance
gt1 = inp1[:, 0, :, :]
rd1 = gt1 - ot0[0]
hk1 = rd1 / jazr0[0]

# channel 1: same steps
gt2 = inp1[:, 1, :, :]
rd2 = gt2 - ot0[1]
hk2 = rd2 / jazr0[1]

print('****batchnorm2d****:\n', m1(inp1))

# per-channel ratio between my result and the nn.BatchNorm2d output
btch0 = m1(inp1)[:, 0, :, :]
btch1 = m1(inp1)[:, 1, :, :]
print(hk1 / btch0)
print(hk2 / btch1)

The output of this code is:

****batchnorm2d****:
 tensor([[[[-0.3216, -0.9649,  0.9649,  0.3216]],

         [[-0.8628,  0.7301, -0.8628,  2.3230]]],


        [[[-0.9649,  0.9649, -0.9649,  0.3216]],

         [[-0.8628, -0.0664, -0.8628, -0.0664]]],


        [[[-0.9649,  2.2514, -0.9649,  0.3216]],

         [[-0.8628, -0.0664,  1.5266, -0.0664]]]],
       grad_fn=<NativeBatchNormBackward>)
tensor([[[0.5897, 0.5897, 0.5897, 0.5897]],

        [[0.5897, 0.5897, 0.5897, 0.5897]],

        [[0.5897, 0.5897, 0.5897, 0.5897]]], grad_fn=<DivBackward0>)
tensor([[[0.7301, 0.7301, 0.7301, 0.7301]],

        [[0.7301, 0.7301, 0.7301, 0.7301]],

        [[0.7301, 0.7301, 0.7301, 0.7301]]], grad_fn=<DivBackward0>)


I’m not sure I understand your code correctly, but two things differ from what nn.BatchNorm2d computes internally: batchnorm divides by the standard deviation sqrt(var + eps) rather than by the variance, and it normalizes with the biased variance (dividing by N), while torch.var uses the unbiased estimator (dividing by N - 1) by default. Your per-channel ratio is therefore the constant sqrt(var_biased + eps) / var_unbiased, which works out to 0.5897 and 0.7301 for your two channels. If you would like to manually implement a batchnorm layer, you could use something like the sketch below.
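
This is a minimal sketch, not the internal PyTorch implementation; the class name MyBatchNorm2d is just illustrative, and eps and momentum default to the same values nn.BatchNorm2d uses. In training mode it normalizes with the biased variance and sqrt(var + eps), and tracks running statistics with the unbiased variance:

import torch
import torch.nn as nn

class MyBatchNorm2d(nn.Module):
    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        super().__init__()
        self.eps = eps
        self.momentum = momentum
        # learnable affine parameters, initialized like nn.BatchNorm2d
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        # running statistics, used for normalization in eval mode
        self.register_buffer('running_mean', torch.zeros(num_features))
        self.register_buffer('running_var', torch.ones(num_features))

    def forward(self, x):
        if self.training:
            # per-channel statistics over batch and spatial dims;
            # normalization uses the biased variance (divides by N)
            mean = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            with torch.no_grad():
                # the running variance is tracked with the unbiased estimate
                n = x.numel() / x.size(1)
                self.running_mean.mul_(1 - self.momentum).add_(self.momentum * mean)
                self.running_var.mul_(1 - self.momentum).add_(self.momentum * var * n / (n - 1))
        else:
            mean = self.running_mean
            var = self.running_var
        # normalize with sqrt(var + eps), then apply the affine transform
        x = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + self.eps)
        return x * self.weight[None, :, None, None] + self.bias[None, :, None, None]

You can sanity-check it against the built-in module on your input:

m2 = MyBatchNorm2d(2)
print(torch.allclose(m1(inp1), m2(inp1), atol=1e-6))  # True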