ValueError: Target size (torch.Size([16, 19])) must be the same as input size (torch.Size([16, 432]))

I am taking part in a multi-label classification problem. I have created my complete training and validation pipeline, but I can't train because of this error:
ValueError: Target size (torch.Size([16, 19])) must be the same as input size (torch.Size([16, 432]))
I am using the nn.BCEWithLogitsLoss() loss function.
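For context, nn.BCEWithLogitsLoss expects the logits coming out of the model and the targets to have exactly the same shape. A minimal sketch of that shape requirement (the sizes below just mirror the error message):

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()

logits = torch.randn(16, 19)                     # (batch, num_classes) from the model head
targets = torch.randint(0, 2, (16, 19)).float()  # multi-hot targets of the same shape
loss = criterion(logits, targets)                # fine: shapes match

bad_logits = torch.randn(16, 432)                # head with the wrong output size
# criterion(bad_logits, targets)                 # raises the ValueError shown above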

Here is my dataset class:

import os

import cv2
import numpy as np
import torch
from torch.utils.data import Dataset


class HPADataset(Dataset):
    def __init__(self, train_df, train_dir, transform=None):
        self.df = train_df
        self.train_path = train_dir
        self.transform = transform
        self.filter = ['blue', 'yellow', 'red']

    def __len__(self):
        return len(self.df)

    def __getitem__(self, item):
        name = self.df.ID[item]
        # Read the three single-channel filter images and stack them into one HWC array.
        img = [cv2.imread(os.path.join(self.train_path, f'{name}_{filter_}.png'), cv2.IMREAD_GRAYSCALE)
               for filter_ in self.filter]
        img = np.stack(img, axis=-1)
        target = self.df.Label[item]
        target = encode_label(target)  # multi-hot vector over the classes

        if self.transform:
            img = self.transform(image=img)
            img = img["image"]
            # img = np.transpose(img, (2, 0, 1))  # HWC -> CHW, if the transform doesn't do it

        return {
            "image": torch.tensor(img, dtype=torch.float32),
            "target": target,
        }

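For reference, the dataset gets wired into a DataLoader roughly like this (train_df and TRAIN_DIR are placeholder names rather than my exact code, and all images are assumed to have the same size so the default collation works):

from torch.utils.data import DataLoader

dataset = HPADataset(train_df, TRAIN_DIR, transform=None)
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=2)

batch = next(iter(loader))
print(batch["image"].shape)   # e.g. torch.Size([16, H, W, 3]) before any channel transpose
print(batch["target"].shape)  # torch.Size([16, 19]) for 19 classes
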
The data looks like this:
[screenshot of the training dataframe omitted]

and I have encoded the labels to look like this:
tensor([1., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
(essentially a multi-hot encoding over the 19 classes)
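
encode_label builds this multi-hot vector; a minimal sketch of what it could look like, assuming the Label column stores space-separated class indices (that format is an assumption here):

import torch

NUM_CLASSES = 19

def encode_label(label_str, num_classes=NUM_CLASSES):
    # Turn a string of space-separated class indices, e.g. "0 5 8",
    # into a multi-hot float vector of length num_classes.
    target = torch.zeros(num_classes, dtype=torch.float32)
    for idx in label_str.split():
        target[int(idx)] = 1.0
    return target

print(encode_label("0 5 8"))
# tensor([1., 0., 0., 0., 0., 1., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])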
It would be great if someone could help me.
Thanks! 🙂

What model are you using?


I tried resnet18 and efficientnet_b0.

Did you change the output classes to match the number of labels?
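
For example, a minimal sketch of what that looks like with the torchvision constructors (your actual model-creation code may differ; efficientnet_b0 needs a reasonably recent torchvision):

import torch.nn as nn
from torchvision import models

NUM_CLASSES = 19

# resnet18: replace the final fully connected layer
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# efficientnet_b0: replace the Linear layer inside the classifier head
model = models.efficientnet_b0(pretrained=True)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)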


Yes, I did that as well.

Ok, can you send your training loop and print out the shape of your model output?


Ohhh, I just checked. I was not giving the correct number of classes to the model. Now it’s working.
Thanks a lot!

No problem, glad I could help.
