InstanceNorm1d returns 0-filled tensor to 2D tensor. This is because InstanceNorm1d reshapes inputs to (1, N * C, ...) from (N, C, ...) and this makes variances 0

Hi. I’m using a unit batch size and, following this discussion, [Error: Expected more than 1 value per channel when training - #2 by ptrblck], I can switch to InstanceNorm.
But this is the error I get when using it:

x = torch.randn(1, 512)
batch_norm = nn.BatchNorm1d(512)
inst_norm = nn.InstanceNorm1d(512)
batch_norm(x)
>> ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 512])
inst_norm(x)
>> ValueError: InstanceNorm1d returns 0-filled tensor to 2D tensor. This is because InstanceNorm1d reshapes inputs to (1, N * C, ...) from (N, C, ...) and this makes variances 0.
x = torch.randn(32, 512)
out = batch_norm(x)
out.shape
>> torch.Size([32, 512])
  File "/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/miniconda3/lib/python3.9/site-packages/torch/nn/modules/instancenorm.py", line 56, in forward
    self._check_input_dim(input)
  File "/miniconda3/lib/python3.9/site-packages/torch/nn/modules/instancenorm.py", line 132, in _check_input_dim
    raise ValueError(
ValueError: InstanceNorm1d returns 0-filled tensor to 2D tensor. This is because InstanceNorm1d reshapes inputs to (1, N * C, ...) from (N, C, ...) and this makes variances 0.
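Not part of the original post, just a sketch of an alternative: for a plain (N, C) input with a batch size of 1, neither BatchNorm1d nor InstanceNorm1d can compute a meaningful variance, but nn.LayerNorm (or nn.GroupNorm with one group) normalizes over the feature dimension instead and accepts a single sample:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 512)

# nn.LayerNorm normalizes over the feature dimension, so a single
# sample per batch is fine.
layer_norm = nn.LayerNorm(512)
out = layer_norm(x)
print(out.shape)  # torch.Size([1, 512])

# nn.GroupNorm with a single group normalizes over all channels
# jointly and also accepts batch size 1.
group_norm = nn.GroupNorm(1, 512)
out2 = group_norm(x)
print(out2.shape)  # torch.Size([1, 512])
```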

Hi @jpainam

The manual says that nn.InstanceNorm1d works with 3D tensors; your example x has only two dimensions.

Have you tried 3d tensors?
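For reference, a minimal sketch (with made-up shapes) of nn.InstanceNorm1d on a 3D (N, C, L) input, where the statistics are taken over the length dimension:

```python
import torch
import torch.nn as nn

inst_norm = nn.InstanceNorm1d(512)

# (N, C, L): mean and variance are computed per sample and per
# channel over the length dimension L, so L > 1 gives a nonzero
# variance and the layer works as intended.
x = torch.randn(5, 512, 100)
out = inst_norm(x)
print(out.shape)  # torch.Size([5, 512, 100])
```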

It seems like it doesn’t work with 2D inputs, although according to the manual it should.

I’ve found an open issue, nn.InstanceNorm1d - Should be updated to support 1D or 2D inputs, whose checkbox is still unfilled. So it may be that this operation simply doesn’t support 2D inputs for now.

I trained a model with an nn.InstanceNorm1d layer, and running it with a batch of inputs works fine, for example [5, 40, 100].
But when I test it with a batch containing only one sample, [1, 40, 100], it shows the same error message as yours.

This is probably a bug.
