# Computing the accuracy from a one-hot prediction and a one-hot label

I have the following tensors:

• prediction_one_hot:
``````
tensor([[0, 1],
        [0, 1],
        [1, 0],
        [0, 1],
        [0, 1],
        [0, 1],
        [0, 1],
        [1, 0]], dtype=torch.uint8)
``````
• label_one_hot
``````
tensor([[0, 1],
        [1, 0],
        [1, 0],
        [0, 1],
        [0, 1],
        [0, 1],
        [0, 1],
        [1, 0]])
``````

Then how can I compute the accuracy?
This shouldn't be a difficult problem, but I have been thinking about it for a long time and can't figure it out. I thought there might be a built-in tensor function in PyTorch for this, but I couldn't find one.

scikit-learn has a good implementation, but it works on NumPy arrays.
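If you don't mind the NumPy round trip, you can call `sklearn.metrics.accuracy_score` directly on the converted tensors (`.numpy()` on a CPU tensor). A minimal sketch using the data from the question, with the one-hot rows collapsed to class indices via `argmax`:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# The tensors from the question, as NumPy arrays
# (for torch tensors: prediction_one_hot.numpy(), label_one_hot.numpy())
pred = np.array([[0, 1], [0, 1], [1, 0], [0, 1],
                 [0, 1], [0, 1], [0, 1], [1, 0]])
true = np.array([[0, 1], [1, 0], [1, 0], [0, 1],
                 [0, 1], [0, 1], [0, 1], [1, 0]])

# Convert one-hot rows back to class indices, then compare
acc = accuracy_score(true.argmax(axis=1), pred.argmax(axis=1))
print(acc)  # 0.875
```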
Anyway, you can code it yourself. A starting point:

``````
import torch

def classification_metric(pred_labels, true_labels):
    pred_labels = torch.as_tensor(pred_labels, dtype=torch.uint8)
    true_labels = torch.as_tensor(true_labels, dtype=torch.uint8)

    # Sanity check: every entry must be 0 or 1
    assert ((pred_labels == 0) | (pred_labels == 1)).all()
    assert ((true_labels == 0) | (true_labels == 1)).all()

    # True Positive (TP): we predict a label of 1 (positive), and the true label is 1.
    TP = torch.sum((pred_labels == 1) & (true_labels == 1))

    # True Negative (TN): we predict a label of 0 (negative), and the true label is 0.
    TN = torch.sum((pred_labels == 0) & (true_labels == 0))

    # False Positive (FP): we predict a label of 1 (positive), but the true label is 0.
    FP = torch.sum((pred_labels == 1) & (true_labels == 0))

    # False Negative (FN): we predict a label of 0 (negative), but the true label is 1.
    FN = torch.sum((pred_labels == 0) & (true_labels == 1))

    return (TP, TN, FP, FN)
``````
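From those four counts, accuracy is (TP + TN) / (TP + TN + FP + FN). For one-hot data you can also skip the counts entirely and compare predicted class indices via `argmax`; both routes give the same number here. A sketch using the tensors from the question (computing the counts inline, the same way `classification_metric` does):

```python
import torch

prediction_one_hot = torch.tensor([[0, 1], [0, 1], [1, 0], [0, 1],
                                   [0, 1], [0, 1], [0, 1], [1, 0]], dtype=torch.uint8)
label_one_hot = torch.tensor([[0, 1], [1, 0], [1, 0], [0, 1],
                              [0, 1], [0, 1], [0, 1], [1, 0]])

# Element-wise confusion counts over all entries
TP = torch.sum((prediction_one_hot == 1) & (label_one_hot == 1))
TN = torch.sum((prediction_one_hot == 0) & (label_one_hot == 0))
FP = torch.sum((prediction_one_hot == 1) & (label_one_hot == 0))
FN = torch.sum((prediction_one_hot == 0) & (label_one_hot == 1))
accuracy = (TP + TN).float() / (TP + TN + FP + FN).float()
print(accuracy.item())  # 0.875

# Equivalent shortcut for one-hot data: compare class indices row by row
row_accuracy = (prediction_one_hot.argmax(dim=1)
                == label_one_hot.argmax(dim=1)).float().mean()
print(row_accuracy.item())  # 0.875
```

One sample (row 2) is misclassified out of eight, hence 7/8 = 0.875.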