Accuracy drops to 10% on GPU

Hello All,

I tried running the program on this page on my GPU, and the accuracy of the network on the 10000 test images falls from 56% (on CPU) to 10%. As far as I know, I should be getting the same scores regardless of whether it runs on a CPU or a GPU. Could someone let me know what could have gone wrong? Posting the entire code would be lengthy, so I'll post whatever relevant sections you ask for.

The code you shared runs on the GPU by default. What did you change so that it runs on the CPU?
I suspect that your changes to the code cause this problem.

Could you share all the changes you made with us?

The code I shared actually runs on the CPU by default. This section provides the code for running it on a GPU.
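For reference, the GPU part of that tutorial boils down to something like the snippet below (a rough sketch from memory, assuming the tutorial's `net` model and the batches `data` coming out of the loaders):

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)  # move the model's parameters to the GPU

# inside each training/evaluation loop, move the current batch to the same device:
# inputs, labels = data[0].to(device), data[1].to(device)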

Here is the code. It would be great if you could let me know what errors I've made. FYI, I ran this code on my own system, not in Colab; I just uploaded the notebook there to share it.

Your evaluation code looks wrong to me. You load your test data as `inputs`, but you feed `images` forward through your model, i.e. you keep giving the model the same cat, ship, ship, plane images while evaluating it against the different labels obtained from `testloader`.

Changing `images` to `inputs` should solve your problem; see the corrected loop after your quoted code below.

correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
#         images, labels = data
        inputs, labels = data[0].to(device), data[1].to(device) # You load the test images as `inputs`
        outputs = net(images.to(device)) # You feedforward `images`
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
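
For completeness, a corrected version of the loop would look roughly like this (same names as in your notebook; `net`, `testloader`, and `device` are assumed to be defined as in the tutorial):

correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        # load the batch and move both images and labels to the GPU
        inputs, labels = data[0].to(device), data[1].to(device)
        outputs = net(inputs)  # feed forward the same batch you just loaded
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))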

Thank you very much. I just blindly replaced the code without checking it carefully.