# Multi-Label classification: loss is low but accuracy is low

I am trying to fine-tune ResNet-152 for multi-label classification, where the number of labels is 1024.

``````
multiLabelLoss = th.nn.MultiLabelSoftMarginLoss()
predict = resnet(img)
loss = multiLabelLoss(predict, label)
``````

where `label` is `batch_size x n_category`, and so is `predict`.
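For reference, the shapes this setup expects can be sketched with random tensors standing in for the network output and the targets (`batch_size=4` is illustrative; the loss and target layout follow the post):

``````python
import torch as th

batch_size, n_category = 4, 1024
multiLabelLoss = th.nn.MultiLabelSoftMarginLoss()

# Stand-in for resnet(img): raw logits, one score per category.
predict = th.randn(batch_size, n_category)

# Multi-hot targets: each image can carry several labels at once.
label = (th.rand(batch_size, n_category) > 0.9).float()

# The loss is a scalar, averaged over the batch and the categories.
loss = multiLabelLoss(predict, label)
``````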

The loss decreases nicely from 0.7 to 0.04. However, the accuracy on the training set is very low, about 0.09%.

I compute the accuracy from the Hamming distance, implemented as:

``````
predict = th.nn.functional.sigmoid(predict) > 0.5
r = (predict == label.byte())
acc = r.sum().data[0]
acc = float(acc) / 1024
``````

How could this happen? Is there a mistake in how I use `MultiLabelSoftMarginLoss`?


What do you mean by applying a sigmoid? Do you only have two categories?
I see you said the number of labels is 1024. If you actually have 1024 mutually exclusive categories, you need to use softmax instead of sigmoid.
In other words, what is `predict.size()`?

Well, `predict.size()` is `batch_size x n_category`. I am working on multi-label classification, where each image can have multiple labels, so I treat it as `n_category` independent binary classification tasks.
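To make the "`n_category` binary tasks" view concrete: with its default mean reduction, `MultiLabelSoftMarginLoss` is numerically the same as `BCEWithLogitsLoss` applied element-wise, i.e. an independent sigmoid/binary-cross-entropy per category. A quick check (random tensors, illustrative sizes):

``````python
import torch as th

predict = th.randn(8, 1024)                      # logits
label = (th.rand(8, 1024) > 0.9).float()         # multi-hot targets

a = th.nn.MultiLabelSoftMarginLoss()(predict, label)
b = th.nn.BCEWithLogitsLoss()(predict, label)    # per-label sigmoid + BCE

# Both reduce to the same scalar, so sigmoid (not softmax) is the
# right activation here.
assert th.allclose(a, b, atol=1e-6)
``````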

Sorry, I don't know how to do that. Maybe someone else can help you.

Can you check whether this line is doing the right thing?

``````r = (predict == label.byte())
``````

Just as a quick manual verification…

Aha. That line is correct, but the next one is not. After changing it to `acc = r.float().sum().data[0]`, the result is correct. It seems that summing a byte tensor overflows: the count wraps around once it exceeds 255.
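The wrap-around is easy to reproduce. In the PyTorch version used in this thread, `.sum()` on a `ByteTensor` accumulated in uint8; recent versions promote to int64 by default, so the uint8 accumulator is forced explicitly below to show the old behaviour:

``````python
import torch as th

matches = th.ones(300, dtype=th.uint8)  # pretend 300 correct predictions

wrapped = matches.sum(dtype=th.uint8).item()  # uint8 accumulation: 300 mod 256
correct = matches.float().sum().item()        # cast to float first, as in the fix

print(wrapped, correct)
``````

With a batch of 1024-way predictions, the number of matching entries almost always exceeds 255, which is why the reported accuracy collapsed.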