PyTorch error when batch size is 1: ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 256, 1, 1])

Below is my DeepLab model:

""" DeepLabv3 Model download and change the head for your prediction"""
from torchvision.models.segmentation.deeplabv3 import DeepLabHead
from torchvision import models


def createDeepLabv3(outputchannels=1):
    """DeepLabv3 class with custom headwor
    Args:
        outputchannels (int, optional): The number of output channels
        in your dataset masks. Defaults to 1.

    Returns:
        model: Returns the DeepLabv3 model with the ResNet101 backbone.
    """
    model = models.segmentation.deeplabv3_resnet101(pretrained=True)
    model.classifier = DeepLabHead(2048, outputchannels)
    # Set the model in training mode
    model.train()
    return model
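
For reference, here is a quick sanity check of the forward pass. Note that torchvision's segmentation models return a dict of outputs; batch size 2 is used here to avoid the error described below:

```
import torch

model = createDeepLabv3(outputchannels=1)
x = torch.ones((2, 3, 360, 480), dtype=torch.float32)
out = model(x)["out"]   # segmentation logits, upsampled to the input size
print(out.shape)        # torch.Size([2, 1, 360, 480])
```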

When I use an input of size (2, 3, 360, 480) everything works fine, but when I change the batch size to 1, i.e. (1, 3, 360, 480), it throws this error:

```
  _ = train_model(model,
  File "/home/dsingh/Desktop/segmentation/deeplabv3/DeepLabv3FineTuning/trainer.py", line 50, in train_model
    outputs = model(inputs)
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torchvision/models/segmentation/_utils.py", line 29, in forward
    x = self.classifier(x)
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torchvision/models/segmentation/deeplabv3.py", line 92, in forward
    _res.append(conv(x))
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torchvision/models/segmentation/deeplabv3.py", line 62, in forward
    x = mod(x)
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torch/nn/modules/batchnorm.py", line 168, in forward
    return F.batch_norm(
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torch/nn/functional.py", line 2280, in batch_norm
    _verify_batch_size(input.size())
  File "/home/dsingh/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/torch/nn/functional.py", line 2248, in _verify_batch_size
    raise ValueError("Expected more than 1 value per channel when training, got input size {}".format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 256, 1, 1])
```


```
tensor = torch.ones((1, 3, 360, 480), dtype=torch.float32)
model(tensor)
```

This throws the error above, but the snippet below works fine. I am confused!

```
tensor = torch.ones((2, 3, 360, 480), dtype=torch.float32)
model(tensor)
```

The input is an RGB image, used for segmentation.

The models expect a 4-dimensional input (batch, channels, height, width).
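
If you start from a single image tensor of shape (channels, height, width), the batch dimension can be added with unsqueeze, for example:

```
import torch

img = torch.rand(3, 360, 480)   # a single RGB image (C, H, W)
batch = img.unsqueeze(0)        # add a batch dimension -> (1, 3, 360, 480)
print(batch.shape)              # torch.Size([1, 3, 360, 480])
```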

Hello, I think it doesn't work because of Batch Normalization. In training mode, a BatchNorm layer normalizes each channel using statistics computed over the current batch. The ASPP pooling branch of DeepLabv3 global-average-pools the feature map down to 1x1, so with a batch size of 1 the BatchNorm layer there sees exactly one value per channel (that is the [1, 256, 1, 1] in the error message) and cannot compute batch statistics.
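
Here is a minimal sketch that reproduces the failure on a bare BatchNorm2d layer, together with two common workarounds: switch to eval mode for single-image inference, or pass drop_last=True to the DataLoader so no batch of size 1 reaches the model during training (the `dataset` below is a placeholder for your own dataset):

```
import torch
import torch.nn as nn

# Reproduce the failure: BatchNorm over a (1, 256, 1, 1) tensor in training mode
bn = nn.BatchNorm2d(256)
bn.train()
try:
    bn(torch.ones(1, 256, 1, 1))
except ValueError as e:
    print(e)  # Expected more than 1 value per channel when training, ...

# Workaround 1: for inference, eval mode uses the stored running statistics,
# so a single value per channel is fine
bn.eval()
out = bn(torch.ones(1, 256, 1, 1))  # works

# Workaround 2: when training, drop the last incomplete batch so the batch
# size never falls to 1 (`dataset` is a placeholder)
# loader = torch.utils.data.DataLoader(dataset, batch_size=2, drop_last=True)
```

Since createDeepLabv3 explicitly calls model.train(), calling model.eval() before running a single image through the model (or keeping the batch size at 2 or more during training) should avoid the error.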