BUG: inception_v3 training error

Hi, inference using the inception_v3 model from torchvision seems to fail in an unexpected way.
Judging by the error log, the problem seems to have to do with how batch norm is implemented.

I’m using:
Python 3.5
PyTorch 0.4.1
torchvision 0.2.1

import torch
import torchvision

model = torchvision.models.inception_v3()

inputs = torch.Tensor(1, 3, 299, 299)
outputs = model(inputs)
Traceback (most recent call last):
  File "scrapbook.py", line 7, in <module>
    outputs = model(inputs)
  File "/data/mingrui/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/mingrui/anaconda3/lib/python3.5/site-packages/torchvision-0.2.1-py3.5.egg/torchvision/models/inception.py", line 109, in forward
  File "/data/mingrui/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/mingrui/anaconda3/lib/python3.5/site-packages/torchvision-0.2.1-py3.5.egg/torchvision/models/inception.py", line 308, in forward
  File "/data/mingrui/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/mingrui/anaconda3/lib/python3.5/site-packages/torchvision-0.2.1-py3.5.egg/torchvision/models/inception.py", line 326, in forward
  File "/data/mingrui/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/mingrui/anaconda3/lib/python3.5/site-packages/torch/nn/modules/batchnorm.py", line 66, in forward
    exponential_average_factor, self.eps)
  File "/data/mingrui/anaconda3/lib/python3.5/site-packages/torch/nn/functional.py", line 1251, in batch_norm
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size [1, 768, 1, 1]

My bad, there isn’t actually a bug. Using a batch size of 2 works:

import torch
import torchvision

model = torchvision.models.inception_v3()

inputs = torch.Tensor(2, 3, 299, 299)
outputs = model(inputs)

The batch size I tested with was just 1.

Why can’t the batch size be 1? I think it is still a bug.

During inference it can be 1; during training it cannot, because the batch norm layers cannot compute batch statistics from a single sample.
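For anyone who only wants to run inference with a single image: a minimal sketch of the workaround implied above is to put the model into eval mode first, so batch norm uses its running statistics instead of per-batch statistics (torch.randn is used here just to get a valid dummy input):

import torch
import torchvision

model = torchvision.models.inception_v3()
model.eval()  # eval mode: batch norm uses running stats, so batch size 1 is fine

inputs = torch.randn(1, 3, 299, 299)  # dummy input with batch size 1
with torch.no_grad():  # no gradients needed for inference
    outputs = model(inputs)

print(outputs.shape)  # torch.Size([1, 1000])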
