How to freeze BN layers while training the rest of the network (mean and var won't freeze)

I cannot reproduce this issue using the code snippet below; the running stats stay unchanged after calling norm.eval():

import torch
import torch.nn as nn

norm = nn.InstanceNorm2d(num_features=3, track_running_stats=True)
print(norm.running_mean, norm.running_var)
> tensor([0., 0., 0.]) tensor([1., 1., 1.])

x = torch.randn(2, 3, 24, 24)

out = norm(x)  # in train mode, each forward pass updates the running stats
print(norm.running_mean, norm.running_var)
> tensor([-0.0029,  0.0005,  0.0003]) tensor([0.9988, 1.0021, 0.9980])

out = norm(x)
print(norm.running_mean, norm.running_var)
> tensor([-0.0056,  0.0010,  0.0006]) tensor([0.9978, 1.0040, 0.9962])

norm.eval()  # in eval mode, the running stats are no longer updated
out = norm(x)
print(norm.running_mean, norm.running_var)
> tensor([-0.0056,  0.0010,  0.0006]) tensor([0.9978, 1.0040, 0.9962])

Are you using an older PyTorch version where this might have been a known issue?
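
As a side note, to actually freeze the batchnorm layers (including their running stats) while training the rest of the network, a common approach is to put them into eval mode and disable gradients for their affine parameters. Here is a minimal sketch (the freeze_bn helper name is my own, and the toy model is just for illustration):

import torch
import torch.nn as nn

def freeze_bn(model):
    # Put all batchnorm layers into eval mode so that running_mean and
    # running_var are no longer updated during the forward pass.
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            module.eval()
            # Also freeze the affine parameters (weight and bias).
            for param in module.parameters():
                param.requires_grad = False

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
)
model.train()      # the rest of the network stays in training mode
freeze_bn(model)   # call after model.train(), since train() resets BN modules

x = torch.randn(2, 3, 24, 24)
out = model(x).sum()
out.backward()     # gradients flow to the conv layer, but not to the BN params

Note that calling model.train() flips the batchnorm layers back into train mode, so freeze_bn needs to be called again after every model.train() call (e.g. at the start of each epoch).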