Accuracy results

Epoch: [1][0/25] Loss 0.5619 (0.5619) Prec@1 73.438 (73.438) Prec@5 100.000 (100.000)

Epoch: [1][20/25] Loss 0.5804 (0.7834) Prec@1 73.438 (67.113) Prec@5 100.000 (100.000)

I have two classes and Prec@5 is always 100. Is that okay, or do I have a problem?

I don’t think a Prec@5 metric would make sense, since it would return:

(number of relevant classes in top 5 predictions) / (number of relevant classes)

Since you are working with two classes, the "relevant" class would always be among the "top 5" predictions.
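For illustration, here is a minimal sketch of a top-k accuracy computation (the `topk_accuracy` helper and the toy logits below are made up, not the exact `accuracy` function from the ImageNet example). Note that `torch.topk` cannot select more values than there are classes, so `k` is clamped here; the point is that once `k` covers all classes, the metric is trivially 100%:

```python
import torch

def topk_accuracy(output, target, k):
    """Percentage of samples whose true label is among the top-k predictions.
    Hypothetical helper; k is clamped because torch.topk requires k <= num classes."""
    k = min(k, output.size(1))
    _, pred = output.topk(k, dim=1)           # indices of the k largest logits per sample
    correct = pred.eq(target.view(-1, 1))     # compare each of the k guesses to the label
    return correct.any(dim=1).float().mean() * 100

# Two-class logits for a toy batch of 4 samples
logits = torch.tensor([[ 2.0, -1.0],
                       [-0.5,  0.3],
                       [ 1.2,  0.1],
                       [-2.0,  1.5]])
target = torch.tensor([1, 0, 0, 1])           # deliberately half wrong

print(topk_accuracy(logits, target, k=1))     # 50.0 for these logits
print(topk_accuracy(logits, target, k=5))     # always 100.0: both classes fit in the top-k
```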


Should I use another accuracy metric, or is it okay if I keep using top-1 & top-5 in my case?

I think the plain accuracy calculation would be sufficient for a binary classification use case and you could remove the Prec@5 one.
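As a rough sketch of that suggestion (variable names below are placeholders), plain accuracy for a two-class setup can be computed directly from the argmax of the logits:

```python
import torch

# Toy batch: two logits per sample, one label per sample
logits = torch.tensor([[ 2.0, -1.0],
                       [-0.5,  0.3]])
target = torch.tensor([0, 1])

pred = logits.argmax(dim=1)                       # predicted class per sample
accuracy = (pred == target).float().mean() * 100  # fraction of correct predictions
print(f"Accuracy: {accuracy.item():.1f}%")        # 100.0% for this toy batch
```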
