Expected input batch_size (18) to match target batch_size (32)

I want to build a GAN Discriminator, and below is my code:

import torch.nn as nn


class _D(nn.Module):
    def __init__(self):
        super(_D, self).__init__()
        # Block 1: 3 -> 128 channels, stride 2 halves the spatial size
        self.conv1 = nn.Conv2d(3, 128, kernel_size=3, stride=2, padding=1)
        self.batchNorm1 = nn.BatchNorm2d(128)
        self.leakyReLU1 = nn.LeakyReLU(negative_slope=0.2, inplace=True)

        self.conv2 = nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1)
        self.batchNorm2 = nn.BatchNorm2d(256)
        self.leakyReLU2 = nn.LeakyReLU(negative_slope=0.2, inplace=True)

        self.conv3 = nn.Conv2d(256, 512, kernel_size=3, stride=2, padding=1)
        self.batchNorm3 = nn.BatchNorm2d(512)
        self.leakyReLU3 = nn.LeakyReLU(negative_slope=0.2, inplace=True)

        self.conv4 = nn.Conv2d(512, 256, kernel_size=3, stride=2, padding=1)
        self.batchNorm4 = nn.BatchNorm2d(256)
        self.leakyReLU4 = nn.LeakyReLU(negative_slope=0.2, inplace=True)

        self.conv5 = nn.Conv2d(256, 128, kernel_size=3, stride=2, padding=1)
        self.batchNorm5 = nn.BatchNorm2d(128)
        self.leakyReLU5 = nn.LeakyReLU(negative_slope=0.2, inplace=True)

        # Final layer: 128 -> 3 channels, no padding, so the spatial size shrinks by 2
        self.conv6 = nn.Conv2d(128, 3, kernel_size=3, stride=1)

    def forward(self, input):
        print()
        x = self.leakyReLU1(self.batchNorm1(self.conv1(input)))
        print(x.size())
        x = self.leakyReLU2(self.batchNorm2(self.conv2(x)))
        print(x.size())
        x = self.leakyReLU3(self.batchNorm3(self.conv3(x)))
        print(x.size())
        x = self.leakyReLU4(self.batchNorm4(self.conv4(x)))
        print(x.size())
        x = self.leakyReLU5(self.batchNorm5(self.conv5(x)))
        print(x.size())
        x = self.conv6(x)
        return x

There is an error around x = self.conv6(x), and it says:

Expected input batch_size (18) to match target batch_size (32)

I am not really sure how to determine the batch size of my convolution. The last thing I tried was batch_size=2 with num_workers=2, and here are the tensor sizes that I printed after every convolution block:
torch.Size([2, 3, 152, 152])
torch.Size([2, 128, 76, 76])
torch.Size([2, 256, 38, 38])
torch.Size([2, 512, 19, 19])
torch.Size([2, 256, 10, 10])
torch.Size([2, 128, 5, 5])
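
For reference, conv6 itself is not printed above; a quick standalone check with the same layer settings (a sketch, not part of the original training code) shows what it produces from that last activation:

import torch
import torch.nn as nn

# Same settings as self.conv6 above, fed with the last printed activation shape
conv6 = nn.Conv2d(128, 3, kernel_size=3, stride=1)
x = torch.randn(2, 128, 5, 5)
print(conv6(x).size())  # torch.Size([2, 3, 3, 3]): no padding, so 5x5 shrinks to 3x3

So each sample still carries a 3-channel 3x3 map, and 2 * 3 * 3 = 18, which is presumably where the 18 in the error message comes from. The batch dimension itself (the 2) just comes from the DataLoader's batch_size, not from the convolutions.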

The error message seems to point to the loss calculation.
Could you check the shape of your model output and target and make sure the batch size is equal?
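
For example, something like this right before the loss call (netD, criterion, images, and label are placeholder names for whatever your training loop actually uses):

output = netD(images)
print(output.size())  # model output shape
print(label.size())   # target shape
loss = criterion(output, label)

The first entry of each printed size (the batch dimension) should be identical.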


Thank you for your reply. My target is torch.Size([32]), so now I'm using a batch size of 32 to match it. Here are the tensor sizes after every convolution block:

torch.Size([32, 3, 152, 152])
torch.Size([32, 128, 76, 76])
torch.Size([32, 256, 38, 38])
torch.Size([32, 512, 19, 19])
torch.Size([32, 256, 10, 10])
torch.Size([32, 128, 5, 5])

Now I use 32 as the batch_size and it gives me this error:

Expected input batch_size (288) to match target batch_size (32)

How can I fix this? Thank you very much once again.

I think I get the problem. You are right: the shape (channels and spatial size) of my model output doesn't match my target, so I need to make them the same. Thank you very much for your answer.
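
For anyone hitting the same error: the 288 in the last message is most likely the 32 samples multiplied by the 3x3 spatial map that conv6 still produces (32 * 3 * 3 = 288), i.e. the discriminator emits a 3-channel 3x3 map per image instead of a single score. Below is a minimal sketch of one possible way to collapse that to one value per sample so the output shape matches a target of torch.Size([32]); it replaces conv6 and is just an illustration, not the only fix:

import torch
import torch.nn as nn

# Replacement for conv6: collapse the 5x5 feature map to a single score per image
conv6 = nn.Conv2d(128, 1, kernel_size=5, stride=1)

x = torch.randn(32, 128, 5, 5)   # activation coming out of the conv5 block
out = torch.sigmoid(conv6(x))    # [32, 1, 1, 1], values in (0, 1)
out = out.view(-1)               # [32], same shape as the target
print(out.size())                # torch.Size([32])

With an output of shape [32], a criterion such as nn.BCELoss sees matching batch sizes; if you use nn.BCEWithLogitsLoss instead, drop the sigmoid.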