Problem with nn.BatchNorm2d's output

Hi, I have just run into a problem with an nn.BatchNorm2d layer.

For some reason, I have to copy one BN layer's weights to another. When I do this, I just use direct assignment (=).

After doing this, I checked the two layers' weight and bias, and they are the same.

Also, when I print these two layers, they are both shown as:

BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

However, when I feed the same input to these two layers, their outputs are different!

I am not sure what is wrong. What other parts of these two layers should I check?

Thank you very much.

OK, I think I should also consider running_mean and running_var.

When I copy these two buffers as well, the problem is solved.
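To illustrate the point: besides the learnable weight and bias, BatchNorm2d keeps running_mean and running_var as buffers, and in eval mode the layer normalizes with these running statistics, so two layers with identical parameters can still produce different outputs. A minimal sketch (layer size 16 and the random data are just for illustration) showing both the problem and the fix via load_state_dict, which copies parameters and buffers together:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

src = nn.BatchNorm2d(16)
dst = nn.BatchNorm2d(16)

# Update src's running statistics by passing a batch through it in training mode.
src.train()
_ = src(torch.randn(8, 16, 4, 4))

# Copying only the learnable parameters is NOT enough:
with torch.no_grad():
    dst.weight.copy_(src.weight)
    dst.bias.copy_(src.bias)

src.eval()
dst.eval()
x = torch.randn(2, 16, 4, 4)

# In eval mode, BN normalizes with running_mean/running_var,
# which still differ between the two layers:
print(torch.allclose(src(x), dst(x)))  # False

# load_state_dict copies parameters AND buffers (running_mean,
# running_var, num_batches_tracked), so the outputs now match:
dst.load_state_dict(src.state_dict())
print(torch.allclose(src(x), dst(x)))  # True
```

Note that in training mode both layers would use the statistics of the current batch, so the mismatch only shows up after calling .eval().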