`num_features` parameter of `nn.InstanceNorm2d` does not change results

I was wondering what would happen if the `num_features` specified for `nn.InstanceNorm2d` does not match the number of channels in the input, so I decided to test it. It turns out that the `num_features` parameter does not affect the result at all: the output is identical for every value I tried.

Why doesn’t `num_features` matter? And if it doesn’t matter, why is the parameter not optional? (I know there is `nn.LazyInstanceNorm2d`, which infers the number of input features dynamically.)

Thanks for your insight,

Minimal working code:

import torch
import torch.nn as nn
import sys

print(f"Python version: {sys.version}; Torch version: {torch.__version__}")

# Random image-like batch: 5 samples, 3 channels, 256x256
input = torch.randint(low=0, high=255, size=(5, 3, 256, 256)).double()
output_base = nn.InstanceNorm2d(num_features=1)(input)

# Compare each mismatched num_features value against the num_features=1 baseline
for i in range(2, 20):
    IN_layer = nn.InstanceNorm2d(num_features=i)
    
    output = IN_layer(input)
    result_check_same = (output_base == output).all()
    
    print(i, result_check_same)

    if not result_check_same:
        break

Output:

Python version: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:25:13) [Clang 14.0.6 ]; Torch version: 2.3.0.post100
2 tensor(True)
3 tensor(True)
4 tensor(True)
5 tensor(True)
6 tensor(True)
7 tensor(True)
8 tensor(True)
9 tensor(True)
10 tensor(True)
11 tensor(True)
12 tensor(True)
13 tensor(True)
14 tensor(True)
15 tensor(True)
16 tensor(True)
17 tensor(True)
18 tensor(True)
19 tensor(True)

A warning will be raised since you are using the default `affine=False` argument: with `affine=False` (and `track_running_stats=False`) the layer creates no per-channel weight, bias, or running-statistics buffers, so `num_features` is never used in the computation and every value produces the same output. If you use affine parameters, the code will instead fail with a shape mismatch.
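
Here is a minimal sketch of that point (the tensor shapes, variable names, and the manual normalization below are my own, not from the thread): with the defaults the layer has no parameters and its output is just per-sample, per-channel standardization, while `affine=True` makes the channel count actually matter.

import torch
import torch.nn as nn

x = torch.randn(2, 3, 8, 8)  # 2 samples, 3 channels

# Default settings (affine=False, track_running_stats=False): the layer owns no
# per-channel weight, bias, or running-stat buffers, so num_features is unused.
layer = nn.InstanceNorm2d(num_features=7)  # deliberately "wrong" for 3 channels
print(list(layer.parameters()))            # [] -- nothing depends on num_features

# The output is plain per-sample, per-channel standardization of the input.
mean = x.mean(dim=(2, 3), keepdim=True)
var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
manual = (x - mean) / torch.sqrt(var + layer.eps)
print(torch.allclose(layer(x), manual, atol=1e-6))  # True

# With affine=True the layer allocates weight/bias of length num_features,
# so a channel mismatch now fails (the exact exception type varies by version).
affine_layer = nn.InstanceNorm2d(num_features=7, affine=True)
try:
    affine_layer(x)
except (RuntimeError, ValueError) as err:
    print("mismatch error:", err)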

Thank you! I realized that I was using an older version of PyTorch that did not have this warning yet. The warning is very informative.