Test accuracy with different batch sizes

Thank you so much, everyone, for your help. Steve_cruz helped me solve my error. I retrained my model after removing the last softmax layer, since cross-entropy loss applies softmax itself. I also decorated my evaluation function with torch.no_grad() and got the model running. It now gives good accuracy at any batch size, but the accuracy still varies by about 2-3% (93-95%) across different batch sizes for some reason. I'll try to find a fix for that. Thanks for your time, everyone!
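For anyone landing here later, a minimal sketch of both fixes. The first part checks numerically that nn.CrossEntropyLoss expects raw logits (it applies log-softmax internally, so a trailing softmax layer squashes the scores twice); the second is an evaluation function under torch.no_grad(). The names evaluate and loader are just illustrative, not from my actual training script:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# 1) CrossEntropyLoss takes raw logits: it applies log_softmax internally,
#    so a softmax layer at the end of the model squashes the scores twice.
logits = torch.tensor([[2.0, 0.5, -1.0]])
target = torch.tensor([0])
ce = nn.CrossEntropyLoss()

loss_logits = ce(logits, target)                                # correct
loss_manual = F.nll_loss(F.log_softmax(logits, dim=1), target)  # equivalent
loss_double = ce(F.softmax(logits, dim=1), target)              # softmax applied twice

assert torch.allclose(loss_logits, loss_manual)
assert not torch.allclose(loss_logits, loss_double)

# 2) Evaluation without gradient tracking; model.eval() additionally
#    switches dropout/batch-norm layers to inference behaviour.
@torch.no_grad()
def evaluate(model, loader):
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

A toy sanity check: `evaluate(nn.Linear(4, 3), [(torch.randn(8, 4), torch.randint(0, 3, (8,)))])` returns an accuracy between 0 and 1.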
