# Calculate accuracy

Hi, I am a bit confused about accuracy in multi-label classification.
I have this function for calculating it:

```
import torch

def calculate_accuracy(output, target):
    "Calculates accuracy"
    # max(dim=1) returns a (values, indices) tuple
    output, _ = output.data.max(dim=1, keepdim=True)
    output = output == 1.0
    output = torch.flatten(output)
    target = target == 1.0
    target = torch.flatten(target)
    return (output == target).float().mean()
```

but when I run it, it gives wrong outputs. For example:

```
x = torch.tensor([0., 0., 0., 0., 0., 0.])
y = torch.tensor([0., 1., 1., 0., 0., 0.])
```

calculate_accuracy(x, y) prints 0.6667…, where it clearly should print 0.333.
Where have I messed up?

@gkrisp9

The mistake you have made is in these two lines:

```
output = output == 1.0
target = target == 1.0
```

This translates the data into:

```
tensor([False, False, False, False, False, False])
tensor([False,  True,  True, False, False, False])
```

Because of this, 4 of the 6 positions match (about 67%), and that is what shows up in your results.
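You can check this directly. A minimal sketch, assuming concrete values for `x` and `y` that reproduce the two boolean tensors above (any `x` with no 1.0s behaves the same):

```python
import torch

# Assumed inputs: x contains no 1.0s, y has 1.0 at positions 1 and 2
x = torch.tensor([0., 0., 0., 0., 0., 0.])
y = torch.tensor([0., 1., 1., 0., 0., 0.])

out = x == 1.0  # all False
tgt = y == 1.0  # False, True, True, False, False, False

# Element-wise equality counts a False == False pair as a match too
accuracy = (out == tgt).float().mean()
print(accuracy.item())  # 4 of the 6 positions agree, so ~0.6667
```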


You are right! Thanks

Now that I see it again, shouldn’t that print 33%? There are 2 ‘True’s out of the 6 values.

@gkrisp9
You are not just evaluating True against True; you are also evaluating False against False. Even that counts as a valid match under your logic:

```
tensor([False, False, False, False, False, False])
tensor([False,  True,  True, False, False, False])
```

You have 4 matches of False against False and 0 matches of True against True.
Final score: 4/6.
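That breakdown is easy to verify with the boolean tensors from the post (a small sketch; the masking here is only for illustration):

```python
import torch

out = torch.tensor([False, False, False, False, False, False])
tgt = torch.tensor([False,  True,  True, False, False, False])

false_false = (~out & ~tgt).sum().item()  # both False: 4 positions
true_true = (out & tgt).sum().item()      # both True: 0 positions
total = out.numel()                       # 6

print(false_false, true_true)             # every match comes from False/False
print((false_false + true_true) / total)  # 4/6
```

If the goal were to score only the positions where the label is True, you could mask first (`out[tgt] == tgt[tgt]`), which on this data would give 0 matches out of 2.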


Oh I missed that. Now I got it, thanks a lot again!