Help me with the reasoning here, because I am a bit lost at the moment after seeing many different approaches.

I need to calculate accuracy in the best possible way. Let's use some of the predefined datasets in PyTorch, like CIFAR-10, CIFAR-100, … so a single-label classification problem.

I have seen many different approaches. Which one is best?

Approach 1 (keep track of correct and total counts on the CPU)

I think this approach is not optimal because you need to convert the number of correct examples to the CPU, possibly to a NumPy array or a Python int. Why not keep it on the GPU?
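For reference, here is a minimal sketch of what I mean by approach 1 (random tensors stand in for a real model and DataLoader over CIFAR-10; all names are made up):

```python
import torch

torch.manual_seed(0)
# Hypothetical stand-in for a DataLoader: 5 batches of (logits, labels)
# for a 10-class problem, batch size 32.
batches = [(torch.randn(32, 10), torch.randint(0, 10, (32,))) for _ in range(5)]

correct = 0  # plain Python ints, accumulated on the CPU
total = 0
with torch.no_grad():
    for logits, labels in batches:
        preds = logits.argmax(dim=1)
        # .item() forces a GPU -> CPU sync on every batch, which is
        # exactly the overhead I am worried about
        correct += (preds == labels).sum().item()
        total += labels.size(0)

accuracy = correct / total
print(f"accuracy: {accuracy:.4f}")
```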

Approach 2 (using the accuracy formula for a single batch)

It may be that we only need the number of correct predictions; the total can be derived from the batch index and the batch size.

This can work for the full dataset if we concatenate the preds and true arrays.
However, I also think this is the wrong approach for CIFAR-10 or CIFAR-100, because just keeping a single count of correct predictions is enough.
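A sketch of approach 2 as I understand it, again with random tensors standing in for a real model and loader (hypothetical setup, not production code): the per-batch formula plus concatenation for the epoch figure.

```python
import torch

torch.manual_seed(0)
# Hypothetical stand-in for a DataLoader over CIFAR-10: 5 batches of size 32.
batches = [(torch.randn(32, 10), torch.randint(0, 10, (32,))) for _ in range(5)]

batch_accs, all_preds, all_true = [], [], []
with torch.no_grad():
    for logits, labels in batches:
        preds = logits.argmax(dim=1)
        # per-batch accuracy via the single-batch formula
        batch_accs.append((preds == labels).float().mean().item())
        all_preds.append(preds)
        all_true.append(labels)

# epoch accuracy from the concatenated preds/true arrays
preds = torch.cat(all_preds)
true = torch.cat(all_true)
epoch_acc = (preds == true).float().mean().item()
print(f"epoch accuracy: {epoch_acc:.4f}")
```

Note that keeping the full preds/true arrays around is overkill for plain accuracy; a single running count of correct predictions gives the same epoch number.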

Thanks @KaiHoo, just keeping the correct and total counts on the CPU is OK, but then you do that conversion for every batch. On the other hand, the formula in approach 2 is something I saw here and slightly modified. It is fine if you need per-batch accuracy, but per-epoch accuracy matters more. Is there a nice example for CIFAR-10 that follows good accuracy-metric practice? I would be happy if someone could share one.

For the epoch accuracy, do you just average the per-batch accuracies with drop_last=False, or do you keep the running count of correct predictions?
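For what it's worth, here is a tiny plain-Python example (made-up toy numbers) of why the two can differ when drop_last=False leaves a smaller last batch:

```python
# Hypothetical toy run: 3 batches of size 4 and a last batch of size 2.
# All predictions correct except the small last batch.
correct_per_batch = [4, 4, 4, 0]
sizes = [4, 4, 4, 2]

# Averaging per-batch accuracies gives every batch equal weight,
# so the small last batch is over-weighted.
mean_of_batch_accs = sum(c / n for c, n in zip(correct_per_batch, sizes)) / len(sizes)

# Keeping the running correct count weights every sample equally.
true_epoch_acc = sum(correct_per_batch) / sum(sizes)

print(mean_of_batch_accs)  # 0.75
print(true_epoch_acc)      # 12/14 ≈ 0.857
```

So averaging batch accuracies is only exact when all batches have the same size; keeping the correct count is always exact.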