Why doesn't the output of InstanceNorm2d have unit variance?

Hi,

I tested nn.InstanceNorm2d in v1.0 and observed that its output tensor does not have unit variance. Below is my test code for both manual standardization and instance normalization. I expected nn.InstanceNorm2d with affine=False to return an output with channel-wise unit variance.

In addition, nn.InstanceNorm2d does not raise an error even when the number of input channels does not match num_features, whereas nn.BatchNorm2d does raise an error in that case. Is this intended?

Thanks,

Yunjey

import torch
import torch.nn as nn

def standardize(x, eps=1e-6):
    N, C, H, W = x.size()
    x = x.view(N, C, H*W)
    mean = torch.mean(x, dim=2, keepdim=True)
    std = torch.std(x, dim=2, keepdim=True)   # unbiased std: divides by H*W - 1
    out = (x - mean) / (std + eps)   # (N, C, H*W)
    return out

# Test with standardization
x = torch.rand(1, 2, 3, 3)
out = standardize(x)
print('var: ', torch.var(out, dim=2))      # [1.0, 1.0]

# Test with InstanceNorm2d
norm = nn.InstanceNorm2d(2, affine=False)
out = norm(x)
N, C, H, W = out.size()
out = out.view(N, C, H*W)
print('var: ', torch.var(out, dim=2))      # [1.1248, 1.1249]


# Dimension not matched
norm = nn.InstanceNorm2d(444, affine=False)
x = torch.randn(2, 3, 3, 3)
out = norm(x)      # This does not raise an error

norm = nn.BatchNorm2d(444, affine=False)
x = torch.randn(2, 3, 3, 3)
out = norm(x)      # This raises an error
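On the second question, my guess (an assumption, not confirmed from the source) is that with affine=False and track_running_stats=False, InstanceNorm2d holds no per-channel tensors at all, so there is no (num_features,)-shaped weight, bias, or running statistic for the input's channel dimension to conflict with. With affine=True the module does own per-channel weight and bias, and then the mismatch is detected:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 3, 3, 3)

# With affine=True, InstanceNorm2d owns weight/bias of shape (num_features,),
# so a channel mismatch surfaces as an error (RuntimeError in older versions,
# ValueError in newer ones -- version behavior is an assumption here).
norm = nn.InstanceNorm2d(444, affine=True)
raised = False
try:
    norm(x)
except (RuntimeError, ValueError) as e:
    raised = True
    print('raised:', type(e).__name__)
```

By contrast, BatchNorm2d always tracks running statistics by default, which is presumably why it checks the channel count even with affine=False.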

I found that measuring with torch.var(out, dim=2, unbiased=False) gives 1.0, which answers the first question: nn.InstanceNorm2d normalizes with the biased (population) variance, so the unbiased estimator reports H*W / (H*W - 1) = 9/8 ≈ 1.125 instead of 1.0.
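To confirm this, here is a biased variant of the standardization above (my own sketch, not the library source). It divides by H*W rather than H*W - 1 and adds eps to the variance inside the square root, matching InstanceNorm2d's default eps=1e-5, and it reproduces the layer's output:

```python
import torch
import torch.nn as nn

def standardize_biased(x, eps=1e-5):
    # Population (biased) standardization: var divides by H*W, and eps is
    # added to the variance before the square root, as InstanceNorm2d does.
    N, C, H, W = x.size()
    x = x.view(N, C, H * W)
    mean = x.mean(dim=2, keepdim=True)
    var = x.var(dim=2, unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps)

x = torch.rand(1, 2, 3, 3)
ref = nn.InstanceNorm2d(2, affine=False)(x).view(1, 2, 9)
out = standardize_biased(x)
print(torch.allclose(out, ref, atol=1e-5))  # True
print(out.var(dim=2, unbiased=False))       # ~1.0 per channel (up to eps)
```

So the original standardize function differs from InstanceNorm2d in two ways: it uses the unbiased std, and it adds eps outside the square root.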