DeepLabV3 ResNet101 wrong input size error

I am using deeplabv3_resnet101 to perform semantic segmentation. I was using the fcn_resnet101 model and everything was working, but switching the model to deeplabv3_resnet101 gives me the following error:

File "train.py", line 45, in <module>
    outt = model(i)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torchvision/models/segmentation/_utils.py", line 22, in forward
    x = self.classifier(x)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torchvision/models/segmentation/deeplabv3.py", line 91, in forward
    res.append(conv(x))
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torchvision/models/segmentation/deeplabv3.py", line 60, in forward
    x = super(ASPPPooling, self).forward(x)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 83, in forward
    exponential_average_factor, self.eps)
  File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1693, in batch_norm
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 256, 1, 1])

My input size is actually [1, 3, 417, 417].
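For reference, a hypothetical minimal reproduction (not my full train.py, just a single random sample in train() mode) hits the same error:

import torch
from torchvision.models.segmentation import deeplabv3_resnet101

# Sketch only: a single-sample batch in train() mode triggers the ValueError above.
model = deeplabv3_resnet101(num_classes=21)
model.train()

i = torch.randn(1, 3, 417, 417)  # batch size 1
outt = model(i)                  # raises: Expected more than 1 value per channel ...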

Any and all help is appreciated

Try to pass more than a single sample to your model while training.
The ASPP pooling branch global-average-pools the feature map down to a single pixel, so with a batch size of 1 the following batch norm layer only sees one value per channel (hence the [1, 256, 1, 1] in the error) and complains. A sketch of the fix is below.
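A minimal sketch, assuming torchvision's pretrained-style model and a random input (not your actual data pipeline):

import torch
from torchvision.models.segmentation import deeplabv3_resnet101

# With a batch of >= 2 samples, the BatchNorm inside ASPPPooling has more than
# one value per channel even though the pooled activation is spatially 1x1.
model = deeplabv3_resnet101(num_classes=21)
model.train()

batch = torch.randn(2, 3, 417, 417)  # batch size 2 instead of 1
out = model(batch)['out']            # torchvision segmentation models return a dict
print(out.shape)                     # torch.Size([2, 21, 417, 417])

In practice that usually just means setting batch_size to at least 2 in your DataLoader (or calling model.eval() when you only want inference on single images).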
