I think the C code can help explain it: when running the forward pass of batch norm you need to set options.stateful_ = True, which is why the running variance is initialized to torch.ones.
I'm not sure I see the relation to the running variance. The example runs in train mode (the default), whereas the running variance only comes into play in eval mode.
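To illustrate the train vs. eval distinction, here is a minimal pure-Python sketch (not the actual PyTorch implementation; `batchnorm_1d` and its signature are made up for illustration). In train mode the batch's own mean/variance are used and the running buffers are only updated as a side effect; only in eval mode are the running buffers actually read:

```python
# Hypothetical sketch of batch-norm statistics handling, NOT PyTorch's code.
def batchnorm_1d(x, running_mean, running_var, training, momentum=0.1, eps=1e-5):
    """Normalize a list of floats; mutates running_mean/running_var (1-element lists)."""
    if training:
        # Train mode: normalize with the *batch* statistics.
        mean = sum(x) / len(x)
        var = sum((v - mean) ** 2 for v in x) / len(x)  # biased variance, for simplicity
        # Running buffers are updated here but NOT used for normalization.
        running_mean[0] = (1 - momentum) * running_mean[0] + momentum * mean
        running_var[0] = (1 - momentum) * running_var[0] + momentum * var
    else:
        # Eval mode: the running buffers finally come into play.
        mean, var = running_mean[0], running_var[0]
    return [(v - mean) / (var + eps) ** 0.5 for v in x]

rm, rv = [0.0], [1.0]  # running variance initialized to ones, as noted above
out_train = batchnorm_1d([1.0, 3.0], rm, rv, training=True)
out_eval = batchnorm_1d([1.0, 3.0], rm, rv, training=False)
```

So the ones-initialization of the running variance has no effect on the train-mode output; it only matters once `training=False`. (PyTorch itself uses the unbiased variance when updating the running buffer, which this sketch glosses over.)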