Why doesn't BatchNorm on a constant input fail?

I would expect BatchNorm on constant values (with zero eps) to fail somehow (throw an exception), but apparently it just outputs zeros…

Could someone please explain what exactly happens when calling batchnorm?

Here’s a code sample:

import torch

x = torch.ones(2, 2, 3)
bn = torch.nn.BatchNorm1d(2, affine=False, eps=0)
bn(x)
>>> tensor([[[0., 0., 0.],
             [0., 0., 0.]],

            [[0., 0., 0.],
             [0., 0., 0.]]])
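
To spell out why I expected a failure, here is a naive re-computation of the normalization by hand (just a sketch, not the actual PyTorch kernel): with a constant input the per-channel batch mean equals the input and the batch variance is zero, so with eps=0 the formula (x - mean) / sqrt(var + eps) turns into 0 / 0, which gives NaNs:

import torch

x = torch.ones(2, 2, 3)
# per-channel batch statistics over the batch and length dims,
# which is what BatchNorm1d uses in train mode
mean = x.mean(dim=(0, 2), keepdim=True)                # all ones
var = x.var(dim=(0, 2), unbiased=False, keepdim=True)  # all zeros
print((x - mean) / torch.sqrt(var + 0.0))              # 0 / 0 -> all nan

Yet the actual layer returns zeros instead.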

I think the C++ code can help explain it: when you run the forward pass of batchnorm, you need to set options.stateful_ = True, and because of that the running variance is set to torch.ones.

I’m not sure I see the relation to running variance. The example is in train (default) mode, whereas the running variance only comes into play in eval mode.
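
A quick way to check that (just a sketch reusing the setup from the question): construct the layer with track_running_stats=False, so there are no running statistics at all; in train mode the layer then normalizes with batch statistics only, and I'd expect the output to still be all zeros, so the running variance can't be what produces them:

import torch

x = torch.ones(2, 2, 3)
# same layer as in the question, but with running statistics disabled entirely
bn = torch.nn.BatchNorm1d(2, affine=False, eps=0, track_running_stats=False)
print(bn(x))  # train mode, batch statistics only; expected to still be all zeros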