How to calculate accuracy

Hi, I have a semi multi-label problem.

My specific problem is a bit different from a classic multi-label problem:
I want to minimize my loss when the prediction is correct in at least one class.
I have a custom loss for my problem:

import torch

def new_loss_function(p, y, b):
    eps = 0.000000000000000000000000001  # tiny constant to avoid log(0)
    k = len(y[0])                        # number of classes
    ones = torch.ones(1, 1).expand(b, k).cuda()
    # negative-class term: -(1 - y) * log(1 - p), summed over the classes
    loss1 = -((ones - y) * (((ones - p) + eps).log())).sum(dim=1)
    # positive-class term: offset the negative classes by k and take the
    # per-sample minimum, so the best-predicted positive class drives the loss
    prod = (ones - y) * k - y * ((p + eps).log())
    loss2 = torch.min(prod, dim=1)[0]
    losses = (loss1 + loss2).sum()
    return losses / b
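
For example, with dummy inputs (p being probabilities after a sigmoid and y a multi-hot label tensor, both of shape (b, k)), it would be called like this:

b, k = 4, 5                                      # dummy batch size and number of classes
p = torch.sigmoid(torch.randn(b, k)).cuda()      # predicted probabilities in [0, 1]
y = torch.randint(0, 2, (b, k)).float().cuda()   # multi-hot target labels
loss = new_loss_function(p, y, b)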

It means that if the model was right in one class:
label = [1, 1, 0, 0, 1]
prediction = [1, 0, 0, 0, 0]
then in my case this is a success.

I'm not sure how to calculate the accuracy of the model in that case.

Based on your description you could probably use:

if (prediction == label).any():
    nb_correct += 1

to count the number of correct samples; the accuracy is then that count divided by the total number of samples.
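
Something like this rough sketch should work, assuming preds and labels are already binary tensors of shape [num_samples, num_classes] (the names here are just placeholders):

import torch

def accuracy_at_least_one(preds, labels):
    # a sample counts as correct if prediction and label agree in at least one class
    correct = (preds == labels).any(dim=1)
    return correct.float().mean().item()

preds = torch.tensor([[1, 0, 0, 0, 0]])
labels = torch.tensor([[1, 1, 0, 0, 1]])
print(accuracy_at_least_one(preds, labels))  # 1.0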

Without a threshold?
The predictions are between 0 and 1.

In your first post you’ve posted the predictions as zeros and ones, so I assumed you’ve already applied a threshold. :wink:

If that’s not the case, you should use a threshold to get the predictions.
The simplest case would be 0. for logits and 0.5 for probabilities (after sigmoid).
The threshold can then be tuned using the ROC curve, etc.
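
E.g. a small sketch, assuming output contains the raw model output of shape [batch_size, num_classes]:

import torch

output = torch.randn(4, 5)       # placeholder for the raw model output (logits)
probs = torch.sigmoid(output)    # probabilities in [0, 1]
preds = (probs > 0.5).long()     # binary predictions via a 0.5 threshold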

Thank you for your answer!
I just wonder, why after sigmoid?

If you are using a sigmoid activation for your model output, you could use the default threshold of 0.5.
On the other hand, if you are returning the raw logits, you could use 0.0.
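
Since sigmoid(0) = 0.5, the two thresholds pick the same classes; a quick check with random placeholder logits:

import torch

logits = torch.randn(4, 5)                               # placeholder for the raw model output
preds_from_logits = logits > 0.0                         # threshold raw logits at 0.0
preds_from_probs = torch.sigmoid(logits) > 0.5           # threshold probabilities at 0.5
print(torch.equal(preds_from_logits, preds_from_probs))  # True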