Model accuracy calculator

My model outputs a tensor of values between 0 and 1 and the ground truth is one-hot encoded. How can I calculate the accuracy using these two tensors?

for X_train_batch, y_train_batch in train_loader:

    # assuming the batch is moved to the training device here
    X_train_batch, y_train_batch = X_train_batch.to(device), y_train_batch.to(device)

    y_train_pred = model(X_train_batch)

    train_loss = criterion(y_train_pred, y_train_batch)
    train_acc = multi_acc(y_train_pred, y_train_batch)

    train_epoch_loss += train_loss.item()
    train_epoch_acc += train_acc.item()

torch.max(y_pred, dim=1)[1] will give the indices. torch.max with the dim argument returns both values and indices. For your use case, you want the indices of the max entries, since those correspond to the predicted classes.
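As a quick illustration (with a made-up 2x3 prediction tensor), torch.max with dim returns a (values, indices) pair:

```python
import torch

# Toy predictions: 2 samples, 3 classes (values made up for illustration)
y_pred = torch.tensor([[0.1, 0.7, 0.2],
                       [0.8, 0.1, 0.1]])

values, indices = torch.max(y_pred, dim=1)
print(values)   # max probability per row: tensor([0.7000, 0.8000])
print(indices)  # class index of each max: tensor([1, 0])
```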

y_pred_tags gives me the index, right? And from y_test I also have to take the index and use it in the correct_pred equation?

I got the y_pred as [0.234, 0.0786, 0.10 … 0.50] and the y_test as [0, 0, 1, 0 … 0] while training. Now how can I use these values to calculate the model's training accuracy?

Hi, sorry that I did not read your code properly. _, y_pred_tags = torch.max(y_pred, dim=1) does already give you the indices.

Reading it again, I am not too sure what the problem with your code is; it seems good to me. Are you getting any errors from your run?

I don’t see any errors, but I’m getting 0 as the accuracy. Here is the code I use for accuracy:

def multi_acc(y_pred, y_test):
    _, y_pred_tags = torch.max(y_pred, dim=1)
    _, y_test_tag = torch.max(y_test, dim=1)
    correct_pred = (y_pred_tags == y_test_tag).float()
    acc = correct_pred.sum() / len(correct_pred)
    acc = torch.round(acc * 100)
    return acc

Hi @thunder, I don’t think torch.max is suitable for your case; I think torch.argmax is a better choice.

The ground truth is already one hot encoded (as you mentioned), you do not have to use torch.max for it. Following your code, y_test does not have to go through torch.max.

_, y_pred_tags = torch.max(y_pred, dim=1) is correct in my opinion (I use it too), as it gives the index of the max output, which is the position of the label in the one-hot encoding.

You can print y_pred_tags and compare it to the ground truth (y_train_batch) to get a clearer picture, and check .shape as well to ensure that you are on the right track.
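For example (a sketch with stand-in tensors; the names mirror the ones in the thread but the values are hypothetical):

```python
import torch

# Hypothetical batch: 4 samples, 24 classes
y_pred = torch.rand(4, 24)           # stand-in for raw model output
y_train_batch = torch.zeros(4, 24)   # stand-in for one-hot targets
y_train_batch[torch.arange(4), torch.randint(0, 24, (4,))] = 1

_, y_pred_tags = torch.max(y_pred, dim=1)
print(y_pred_tags, y_pred_tags.shape)   # class indices, shape [4]
print(y_train_batch.shape)              # one-hot targets, shape [4, 24]
```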

I tried this way.
This is my accuracy function, and below is an example of the values I got:

def multi_acc(y_pred, y_test):
    _, y_pred_tags = torch.max(y_pred, dim=1)
    correct_pred = (y_pred_tags == y_test)
    acc = correct_pred.sum().float() / float(y_test.size(0))
    acc = torch.round(acc * 100)
    return acc

The sample values are:

This is y_train_pred:
tensor([[2.8131e-11, 6.4750e-18, 3.4827e-11, 1.6649e-14, 1.2475e-18, 6.9461e-11,
9.9766e-01, 5.3941e-11, 3.6331e-04, 1.9774e-03, 1.5335e-16, 5.1920e-07,
4.3097e-23, 4.7971e-07, 5.6579e-11, 1.7530e-09, 1.9532e-07, 6.3639e-07,
1.3673e-20, 1.2954e-18, 2.0057e-15, 1.0786e-14, 1.1804e-07, 1.2188e-15]],
This is y_train_batch, which is also the y_test for the accuracy function:
tensor([[0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])

And this is the correct_pred value from the accuracy function:
tensor([[False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
False, False, False, False]])

I don’t understand where exactly the problem is. I just ran for 10 epochs and got an accuracy of 0; not sure why.

Based on the shape of y_train_pred it seems you are working on a multi-label classification?
If so, you shouldn’t use torch.argmax, as that is used for multi-class classification; instead you would need to apply a threshold.
E.g. if the outputs are probabilities for each label and indicate whether that class index is active, you could use pred = output > 0.5.
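A minimal sketch of that thresholding approach for multi-label outputs (the tensors are toy values, not from the thread):

```python
import torch

# Multi-label case: each entry is an independent probability per label
output = torch.tensor([[0.9, 0.2, 0.7, 0.1]])   # 1 sample, 4 labels (toy)
target = torch.tensor([[1., 0., 1., 0.]])       # multi-hot ground truth

pred = (output > 0.5).float()          # threshold each label independently
acc = (pred == target).float().mean()  # fraction of label positions correct
print(acc)  # tensor(1.) here, since every label is predicted correctly
```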

In that case could you explain how the output tensor corresponds to y_test?
y_train_pred has a shape of [1, 24] and seems to contain probabilities. Based on this I assume you are working with a single sample and 24 classes.
Is y_test the one-hot encoded target tensor corresponding to this output?
If so, then the prediction for this particular sample is correct:

torch.argmax(y, dim=1) == torch.argmax(x, dim=1)
> tensor([True])

and you shouldn’t get a 0 accuracy.
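For what it’s worth, the all-False tensor posted above is exactly what comes out if the one-hot target is compared to the predicted class index directly: the index broadcasts against the 0/1 entries. Taking argmax on both sides first fixes it (a sketch with toy values modeled on the posted tensors):

```python
import torch

y_pred = torch.zeros(1, 24)
y_pred[0, 6] = 0.99    # model is confident about class 6
y_test = torch.zeros(1, 24)
y_test[0, 6] = 1       # one-hot target for class 6

y_pred_tags = torch.argmax(y_pred, dim=1)   # tensor([6])

# Bug: comparing index 6 to the one-hot 0/1 entries broadcasts to [1, 24]
print(y_pred_tags == y_test)                # all False

# Fix: compare class indices on both sides
print(y_pred_tags == torch.argmax(y_test, dim=1))   # tensor([True])
```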

Yes, I have 24 classes and y_train_pred is of shape [1, 24]; y_test is of the same size and is the one-hot encoded tensor. How can I measure the accuracy with this?

My code snippet shows how these two tensors can be compared and you could calculate the mean afterwards. For the single sample you would get an accuracy of 100%, since it contains the right prediction.
If you want to calculate the accuracy for the entire validation dataset, you could sum the correctly classified samples and divide by the number of samples afterwards (outside of the validation loop).
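A sketch of that epoch-level accumulation (the model and loader names are placeholders, and the targets are assumed one-hot as in this thread):

```python
import torch

def epoch_accuracy(model, loader):
    """Sum correct predictions over the loader; divide once at the end."""
    correct, total = 0, 0
    with torch.no_grad():
        for X_batch, y_batch in loader:      # y_batch: one-hot, shape [B, C]
            y_pred = model(X_batch)          # probabilities, shape [B, C]
            pred_idx = torch.argmax(y_pred, dim=1)
            true_idx = torch.argmax(y_batch, dim=1)
            correct += (pred_idx == true_idx).sum().item()
            total += y_batch.size(0)
    return 100.0 * correct / total
```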

Thank you so much, it worked