Expected more than 1 value per channel when training, got input size torch.Size([1, 40])

I am trying to create a CNN and I am following the article here, using batchnorm layers in several places.
When training my network with a batch size of 1, I get this error:

Traceback (most recent call last):
  File "D:\Jupiter_playground\fashion_mnist_tidied.py", line 1387, in <module>
    main()
  File "D:\Jupiter_playground\fashion_mnist_tidied.py", line 1351, in main
    preds_dev = network(images_dev)
  File "C:\Users\Admin\.conda\envs\pytorch_env\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\Jupiter_playground\fashion_mnist_tidied.py", line 853, in forward
    x = self.decoder(x)
  File "C:\Users\Admin\.conda\envs\pytorch_env\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\Jupiter_playground\fashion_mnist_tidied.py", line 801, in forward
    return self.fc_blocks(x)
  File "C:\Users\Admin\.conda\envs\pytorch_env\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\Admin\.conda\envs\pytorch_env\lib\site-packages\torch\nn\modules\container.py", line 117, in forward
    input = module(input)
  File "C:\Users\Admin\.conda\envs\pytorch_env\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\Admin\.conda\envs\pytorch_env\lib\site-packages\torch\nn\modules\container.py", line 117, in forward
    input = module(input)
  File "C:\Users\Admin\.conda\envs\pytorch_env\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\Admin\.conda\envs\pytorch_env\lib\site-packages\torch\nn\modules\batchnorm.py", line 131, in forward
    return F.batch_norm(
  File "C:\Users\Admin\.conda\envs\pytorch_env\lib\site-packages\torch\nn\functional.py", line 2012, in batch_norm
    _verify_batch_size(input.size())
  File "C:\Users\Admin\.conda\envs\pytorch_env\lib\site-packages\torch\nn\functional.py", line 1995, in _verify_batch_size
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 40])

However, when I change the batch size to 2, training proceeds and no error occurs. This is odd behaviour, since the model should work with any batch size, including 1. Could anybody please take a look at the code and help me spot the problem?
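
For reference, here is a minimal, stripped-down example that reproduces the same error (this is not my actual network; the layer sizes and names are just placeholders standing in for the final fully connected block that the traceback points at):

import torch
import torch.nn as nn

# Stand-in for the last fully connected block: the traceback shows a
# (1, 40) tensor reaching a BatchNorm1d layer inside fc_blocks.
fc_block = nn.Sequential(
    nn.Linear(80, 40),   # placeholder sizes
    nn.BatchNorm1d(40),  # normalizes each of the 40 features over the batch
    nn.ReLU(),
)

fc_block.train()         # training mode: batch statistics are computed
x = torch.randn(1, 80)   # batch size 1
out = fc_block(x)        # ValueError: Expected more than 1 value per channel ...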

Hi,

This happens because you use batchnorm on an input with no spatial size: an input of size (1, 40) is a single sample with 40 channels.
Batchnorm computes the normalization statistics per channel, and since there is only one element per channel here, the standard deviation is 0, so normalizing would divide by 0 and produce nan. This is why we raise an error in this case.
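
You can see the issue by computing the batch statistics by hand (just an illustration of the math, not the actual batch_norm implementation):

import torch

x = torch.randn(1, 40)              # one sample, 40 channels
mean = x.mean(dim=0)                # per-channel mean over the batch (just x itself here)
var = x.var(dim=0, unbiased=False)  # per-channel variance of a single value -> all zeros
print(var)                          # tensor([0., 0., ..., 0.])
print((x - mean) / var.sqrt())      # 0 / 0 -> nan in every channel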

Layers like batchnorm only work in training mode when the batch size is > 1; otherwise you have to put them in evaluation mode so that they use the saved running statistics.
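
In your case, since the failure happens on the dev-set forward pass, something like this should work even with a single image (a sketch using the names from your traceback; adapt it to your training loop):

network.eval()                       # batchnorm uses the running statistics saved during training
with torch.no_grad():
    preds_dev = network(images_dev)  # batch size 1 is fine in eval mode
network.train()                      # switch back before the next training step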
