One-hot encoding for multi-label classification with BCEWithLogitsLoss()

I am using resnet18 with BCEWithLogitsLoss(), and I am encoding my labels using

y_onehot = nn.functional.one_hot(labels, num_classes=3)
y_onehot = y_onehot.float()

I don't think this is correct for multi-label data.
How should I encode my labels to get multi-label targets?

Hello Irfan!

What is the shape of the output of your network (that you will
pass as the input to your BCEWithLogitsLoss loss function)?

What is the shape of labels, and what are typical values of the
labels tensor, and what do they mean conceptually?


K. Frank

One workaround I use for multi-label classification is to sum the one-hot encoding along the row dimension.

For example, let’s assume there are 5 possible labels in a dataset and each item can have some subset of these labels (including all 5 labels). The code to one-hot encode an item’s labels would look like this:

import torch
import torch.nn as nn

labels = torch.LongTensor([1, 2, 4])              # the item's labels
y_onehot = nn.functional.one_hot(labels, num_classes=5)  # shape (3, 5), one row per label
y_onehot = y_onehot.sum(dim=0).float()            # multi-hot: tensor([0., 1., 1., 0., 1.])
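To connect this back to the loss in the original question, here is a minimal sketch of feeding such a multi-hot target to BCEWithLogitsLoss. The batch size of 1 and the 5-way output head are assumptions for illustration; in practice the logits would come from your model (e.g. resnet18 with its final layer resized to 5 outputs):

```python
import torch
import torch.nn as nn

# Multi-hot target for an item with labels {1, 2, 4} out of 5 classes
labels = torch.LongTensor([1, 2, 4])
target = nn.functional.one_hot(labels, num_classes=5).sum(dim=0).float()

# Hypothetical raw model outputs (logits) for a batch of 1 item
logits = torch.randn(1, 5)

# BCEWithLogitsLoss expects input and target of the same shape,
# so add a batch dimension to the target
loss_fn = nn.BCEWithLogitsLoss()
loss = loss_fn(logits, target.unsqueeze(0))
```

Note that BCEWithLogitsLoss applies the sigmoid internally, so each class is scored independently, which is exactly what you want for multi-label classification.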

Hope this helps!


Thanks! It was easier than I thought it would be.