Target size (torch.Size([12])) must be the same as input size (torch.Size([12, 1000]))

I am using the models.vgg16(pretrained=True) model for image classification, where the number of classes is 3.

The batch size is 12 (trainloader = torch.utils.data.DataLoader(train_data, batch_size=12, shuffle=True)), which matches the 12 in the error message Target size (torch.Size([12])) must be the same as input size (torch.Size([12, 1000])).

I have changed the last FC layer's parameters, so the last FC layer is now Linear(in_features=1000, out_features=3, bias=True).

The loss function is BCEWithLogitsLoss():

criterion = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(vgg16.parameters(), lr=0.001, momentum=0.9)

The training code is:

        # zero the parameter gradients
        optimizer.zero_grad()
        outputs = vgg16(inputs)               #----> forward pass
        loss = criterion(outputs, labels)   #----> compute loss
        loss.backward()                     #----> backward pass
        optimizer.step()                    #----> weights update

While computing the loss, I get this error: Target size (torch.Size([12])) must be the same as input size (torch.Size([12, 1000]))
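
The same error can be reproduced with just the criterion and the two shapes from the message; a minimal sketch, independent of the model and data (random placeholder tensors):

    import torch
    import torch.nn as nn

    criterion = nn.BCEWithLogitsLoss()

    outputs = torch.randn(12, 1000)       # same shape as the model output in the error
    labels = torch.randint(0, 3, (12,))   # same shape as my labels: [12] class indices

    loss = criterion(outputs, labels)
    # ValueError: Target size (torch.Size([12])) must be the same as input size (torch.Size([12, 1000]))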

Your output looks like an output for either

  • a multi-class classification use case, where you could keep the shapes and use nn.CrossEntropyLoss as the criterion (if the target contains class indices); see the first sketch below,
  • or a multi-label classification use case (zero, one, or more classes can be active in each sample), in which case you could keep the criterion but would have to provide a target tensor with the same shape as the model output; see the second sketch below.
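
A minimal sketch of the first option, using random placeholder tensors with the shapes nn.CrossEntropyLoss expects (with 3 classes, the model's final layer would output 3 logits per sample instead of 1000):

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()      # expects logits [batch, num_classes] and class indices [batch]

    outputs = torch.randn(12, 3)           # model logits for 3 classes
    labels = torch.randint(0, 3, (12,))    # class indices 0, 1 or 2, shape [12]

    loss = criterion(outputs, labels)      # no reshaping of the target needed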
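
And a minimal sketch of the second option, keeping nn.BCEWithLogitsLoss but passing a float multi-hot target with the same shape as the model output (again random placeholders, 3 classes assumed):

    import torch
    import torch.nn as nn

    criterion = nn.BCEWithLogitsLoss()

    outputs = torch.randn(12, 3)                    # model logits, shape [batch, num_classes]
    targets = torch.randint(0, 2, (12, 3)).float()  # multi-hot targets, same shape as outputs

    loss = criterion(outputs, targets)              # target size matches the input size

If the labels are single class indices, they can be converted with torch.nn.functional.one_hot(labels, num_classes=3).float() before passing them to the criterion.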