Test accuracy with different batch sizes

Have you fixed the variance across different batch sizes?

@Annus_Zulfiqar I encountered a problem similar to yours. I set model.eval(), but my model gets different accuracies under different batch sizes. Even when I shuffle my dataset with the same batch size, the results differ. I suspect the model did not actually enter eval mode, but I don’t know why I got such a result. Have you solved this problem?
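One quick sanity check for this symptom: in eval mode, layers like BatchNorm use fixed running statistics and Dropout is disabled, so per-sample outputs should be identical no matter how the batch is split. The sketch below uses a hypothetical toy model (the layer sizes and names are illustrative, not the original poster’s code) to verify that behaviour:

```python
import torch
import torch.nn as nn

# Hypothetical toy model containing the two layers whose behaviour
# depends on train/eval mode: BatchNorm and Dropout.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.BatchNorm1d(16),  # eval mode: uses running stats, not batch stats
    nn.ReLU(),
    nn.Dropout(0.5),     # eval mode: identity (no units dropped)
    nn.Linear(16, 4),
)

model.eval()  # switch BatchNorm/Dropout to inference behaviour

x = torch.randn(32, 8)

with torch.no_grad():  # no autograd bookkeeping during evaluation
    full = model(x)                                      # one batch of 32
    halves = torch.cat([model(x[:16]), model(x[16:])])   # two batches of 16

# In eval mode, per-sample outputs match regardless of batch size.
print(torch.allclose(full, halves, atol=1e-6))
```

If this check fails on your own model, some submodule is still in training mode (e.g. a wrapped model where `.eval()` was called on the wrong object), which would explain batch-size-dependent accuracy.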

Honestly, it has been two years since I encountered this problem while working on a deep learning project in PyTorch. I don’t quite remember the workaround, but I know the issue still persisted in the end. You will probably have to read other answers to similar questions.

This is what I did at the time, and I quote:

“Thank you so much everyone for your help. Steve_cruz helped me solve my error. I retrained my model after removing the last softmax layer, since cross-entropy loss applies softmax itself. I also decorated my evaluation function with torch.no_grad() and got the model running. Now it gives me good accuracy with any batch size, but those accuracies still vary by 2–3% (93–95%) across different batch sizes for some reason. I’ll try to find a fix for that. Thanks for your time, everyone!”
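The point in the quote about removing the final softmax is worth spelling out: PyTorch’s `nn.CrossEntropyLoss` is equivalent to `LogSoftmax` followed by `NLLLoss`, so the model should output raw logits. Feeding it already-softmaxed probabilities applies softmax twice, which flattens the loss surface and hurts training. A minimal sketch (the tensors here are made-up illustrative data):

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 3)            # raw model outputs (no softmax layer)
targets = torch.tensor([0, 2, 1, 0])  # class indices

# CrossEntropyLoss == LogSoftmax + NLLLoss applied internally.
direct = nn.CrossEntropyLoss()(logits, targets)
manual = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)

# The two formulations agree, so an extra softmax in the model is redundant
# (and harmful: softmax(softmax(x)) is not what the loss expects).
print(torch.allclose(direct, manual))
```

The remaining 2–3% accuracy spread the quote mentions is a separate issue; with eval mode and `no_grad()` correctly applied, the usual remaining suspect is how the final partial batch is averaged into the overall accuracy.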