Multi-Label Classification in PyTorch

That should depend on your label type. If you do multilabel classification (with multiple singular-valued class indices as the result), I would recommend calculating an accuracy/F1 score per class. If you do, for example, multilabel segmentation, I would also recommend a per-class evaluation, e.g. evaluating each segmentation map with the Dice coefficient or something similar.

Evaluating each class on its own also has the advantage that it is easier to trace whether your model performs badly for only a single class.
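For instance, here is a minimal per-class evaluation sketch using sklearn (the arrays y_true and probs below are made-up placeholders, and 0.5 is just a common default threshold):

import numpy as np
from sklearn.metrics import f1_score

# Hypothetical multi-hot targets and predicted probabilities (4 samples, 3 classes)
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
probs = np.array([[0.9, 0.2, 0.7],
                  [0.1, 0.8, 0.3],
                  [0.6, 0.9, 0.2],
                  [0.2, 0.1, 0.8]])

y_pred = (probs > 0.5).astype(int)  # threshold the probabilities

# average=None returns one F1 score per class instead of a single aggregate
print(f1_score(y_true, y_pred, average=None))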


Hi,

Could anyone give me an example of what a ResNet or another network would look like using BCELoss for multilabel classification? I'm using the MultiLabelSoftMarginLoss function, but the accuracy is getting very low.

Here is a very simple dummy example:

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(20, 5)  # predict logits for 5 classes
x = torch.randn(1, 20)    # a single dummy input sample
y = torch.tensor([[1., 0., 1., 0., 0.]])  # classes A and C are active

criterion = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-1)

for epoch in range(20):
    optimizer.zero_grad()
    output = model(x)
    loss = criterion(output, y)
    loss.backward()
    optimizer.step()
    print('Loss: {:.3f}'.format(loss.item()))
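At evaluation time you would typically turn the logits into discrete predictions; one common sketch (not part of the training loop above) is a sigmoid followed by a threshold:

with torch.no_grad():
    probs = torch.sigmoid(model(x))  # logits -> per-class probabilities
    preds = (probs > 0.5).float()    # 0.5 is a common, but tunable, threshold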

Does anyone know how to do multi-label classification when the number of classes is huge, say up to one million labels?

Thank you.

This is similar to the situation in NLP. You could encode your classes with an embedding and then train a regression network to predict the embedded values.
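A rough sketch of that idea (everything here is hypothetical: fixed random label embeddings, a linear stand-in for the image backbone, and mean-pooling of the active tags' embeddings as the regression target):

import torch
import torch.nn as nn
import torch.nn.functional as F

num_labels, emb_dim = 100_000, 64       # scaled down from a million for the sketch

label_emb = nn.Embedding(num_labels, emb_dim)
label_emb.weight.requires_grad_(False)  # keep the label codes fixed

model = nn.Linear(2048, emb_dim)        # stand-in for an image feature backbone
features = torch.randn(1, 2048)         # dummy image features
active = torch.tensor([3, 42])          # indices of this image's tags

target = label_emb(active).mean(dim=0, keepdim=True)
loss = F.mse_loss(model(features), target)

# Decode at inference time by taking the labels whose embeddings are
# closest to the predicted vector, e.g. by cosine similarity.
with torch.no_grad():
    sims = F.cosine_similarity(model(features), label_emb.weight)
    candidate_tags = sims.topk(5).indices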

Good idea, but could you offer more details or any resources about your method? Since the task input is an image and the output is a set of tags for this image, I have no idea how to use an embedding to do multi-label classification.


Thank you, @ptrblck. I'm testing here 🙂

Here are some slides on evaluation.

The metrics can be implemented very easily in Python.

Thanks, @SpandanMadan. Do you have any examples of how to calculate precision for multilabel classification in Python using the Hamming distance?


I'm not sure how the Hamming distance can be used for measuring classification performance.

In multiclass classification, any sample belongs to exactly one class. In multilabel classification, samples can simultaneously belong to multiple classes. However, a sample can either be completely present in a class or not; that's what makes classification a discrete problem in the output variable.
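For concreteness, here is how the targets typically look in the two settings (a minimal illustration):

import torch

# Multiclass: each sample gets exactly one class index
y_multiclass = torch.tensor([2, 0, 1])       # e.g. used with nn.CrossEntropyLoss

# Multilabel: each sample gets a multi-hot vector over all classes
y_multilabel = torch.tensor([[0., 0., 1.],
                             [1., 0., 1.],
                             [0., 1., 0.]])  # e.g. used with nn.BCEWithLogitsLoss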

How would you incorporate the Hamming distance into this? Can you explain your problem in detail to give more context?

Multi-label classification problems can be solved with PyTorch. I have done it successfully (e.g. a 10-species monkey classification).

My problem is image retrieval. I use a deep neural network to generate hashing codes for images. In the retrieval phase, I calculate the Hamming distance as the similarity between images.
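A minimal sketch of that retrieval step, assuming the codes are already binarized 0/1 tensors (the 5-bit codes below are made up):

import torch

query = torch.tensor([1, 0, 1, 1, 0])
database = torch.tensor([[1, 0, 1, 0, 0],
                         [0, 1, 1, 1, 0]])

# Hamming distance = number of differing bits; smaller = more similar
dists = (database != query).sum(dim=1)
print(dists)  # tensor([1, 2])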

I was wondering how you managed to set up the DataLoader for the multi-label case?

Do you know why now? I have the same problem as you! 😱

Hi,
I want to use multi-label classification, but in my project the order of the classes is crucial. For example, predicting classes 4 and 10 is not equal to predicting classes 10 and 4. I don't know how to use BCE here, because both classes would simply be 1 in the target. Could you please advise on my issue? What should my loss function be; is there any loss function I can use for this problem?
Thank you so much,

The Hamming distance doesn't seem like a good metric, to be honest. It treats every position of the string equally; are you sure you want to do that?

Can you give some more context, please? What are the input and output? Post examples of them.

I have two categories of images to classify, and everything works, but the only problem is: how do I map the label back to a category? My NN is giving me labels like 0 and 1, but how do I know which category 0 stands for and which one 1 stands for? I use datasets.ImageLoader and then the DataLoader for getting the batches.
Please help!
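If you meant torchvision's datasets.ImageFolder, the mapping from folder names to label indices is stored on the dataset object, e.g.:

from torchvision import datasets

dataset = datasets.ImageFolder('path/to/data')  # hypothetical path
print(dataset.classes)       # e.g. ['cats', 'dogs']
print(dataset.class_to_idx)  # e.g. {'cats': 0, 'dogs': 1}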

In your example, there is no batch. How can I add batch training to your example? Moreover, how do I calculate the loss and the Hamming loss or F1 score?

You can just increase the number of samples in the batch dimension, if you want to use more than a single sample:

model = nn.Linear(20, 5)  # predict logits for 5 classes
x = torch.randn(2, 20)    # now a batch of 2 samples
y = torch.tensor([[1., 0., 1., 0., 0.],
                  [1., 0., 1., 0., 0.]])  # classes A and C active for both samples

To calculate metrics you could use e.g. sklearn.metrics.
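For example, a small sketch reusing model, x, and y from the snippet above (both hamming_loss and f1_score accept multi-hot arrays):

from sklearn.metrics import f1_score, hamming_loss

with torch.no_grad():
    preds = (torch.sigmoid(model(x)) > 0.5).float()  # logits -> binary predictions

print(hamming_loss(y.numpy(), preds.numpy()))
print(f1_score(y.numpy(), preds.numpy(), average='micro'))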

Did you figure it out?