RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of size: : [32]

I’m trying to understand the error I’m getting from the loss function. My training loop is below:

import torch
import torch.nn as nn

model = model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
criterion = nn.CrossEntropyLoss()

for epoch in range(2):
    for idx, (img, label) in enumerate(train_dl):
        print(f"img.shape: {img.shape}, label.shape {label.shape}")

        img = img.to(device)
        label = label.to(device)

        optimizer.zero_grad()
        outputs = model(img)

        print(f"outputs.shape: {outputs.shape}")
        loss = criterion(outputs, label)  # RuntimeError is raised here
        break  # stop after the first batch while debugging

The output is as follows:

img.shape: torch.Size([32, 3, 224, 224]), label.shape torch.Size([32])
outputs.shape: torch.Size([32, 64, 27, 5])

What am I doing wrong? Does it have something to do with my label shape?

Your output shape of [32, 64, 27, 5] indicates a multi-class segmentation with 64 classes. For this use case the target should have the shape [32, 27, 5] and contain class indices in the range [0, 63].
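
Here is a minimal sketch of that segmentation case, with random tensors standing in for the real model output and target mask:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Segmentation: input is [batch, classes, H, W],
# target is [batch, H, W] holding a class index per pixel.
outputs = torch.randn(32, 64, 27, 5)         # stand-in for the model logits
targets = torch.randint(0, 64, (32, 27, 5))  # class indices in [0, 63]
loss = criterion(outputs, targets)           # shapes match, no error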

If you are not working on segmentation but on multi-class classification, as the target shape of [32] indicates, make sure your model output has the shape [batch_size, nb_classes].
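
Since the model currently ends in a [32, 64, 27, 5] feature map, one common fix is to pool away the spatial dimensions and add a linear head. A minimal sketch, assuming the 64-channel feature map feeds a classifier; `backbone` and `num_classes` here are placeholders, not names from your code:

import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self, backbone, num_classes):
        super().__init__()
        self.backbone = backbone              # produces [B, 64, H, W]
        self.pool = nn.AdaptiveAvgPool2d(1)   # [B, 64, H, W] -> [B, 64, 1, 1]
        self.fc = nn.Linear(64, num_classes)  # [B, 64] -> [B, num_classes]

    def forward(self, x):
        x = self.backbone(x)
        x = self.pool(x)
        x = torch.flatten(x, 1)               # drop the spatial dimensions
        return self.fc(x)

# Usage with a stand-in backbone, just to show the shapes:
backbone = nn.Conv2d(3, 64, kernel_size=3)
model = Classifier(backbone, num_classes=10)
out = model(torch.randn(32, 3, 224, 224))     # -> [32, 10]

With an output of [batch_size, nb_classes], nn.CrossEntropyLoss accepts your [32] target directly.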