Can't get pytorch to use my 2070 GPU

Hi @Paze!
I think you might actually BE training on your GPU, but because your batch size is set to 1, most of the time is going to moving data to and from GPU memory instead of doing the actual computation. Have you tried a larger batch size?
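Something like this (just a sketch; `train_dataset` and the exact numbers are placeholders, adjust to your code):

```python
from torch.utils.data import DataLoader

# Larger batches keep the GPU busy with compute instead of per-sample transfers.
# pin_memory=True can also speed up host-to-GPU copies.
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True,
                          num_workers=4, pin_memory=True)
```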

If you’d like to see what’s taking up all of your time, you can use torch.utils.bottleneck.
https://pytorch.org/docs/1.5.1/bottleneck.html?highlight=torch%20utils%20bottleneck
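You run your training script through it from the command line (replace `train.py` with the name of your script):

```
python -m torch.utils.bottleneck train.py
```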

Other than that, it looks like you aren’t using “with torch.no_grad():” in your evaluation routine. I’m pretty sure you should wrap any inference-related calls to your model/loss function in that context manager; otherwise PyTorch keeps building the autograd graph during evaluation, which costs extra memory and time.
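Roughly along these lines (a sketch only; I’m assuming names like `model`, `criterion`, `val_loader`, and `device` from a typical setup, not your actual code):

```python
model.eval()  # also switches dropout/batchnorm to eval behaviour
with torch.no_grad():  # no autograd graph is built, so eval is lighter and faster
    for inputs, targets in val_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        outputs = model(inputs)
        loss = criterion(outputs, targets)
model.train()  # switch back before resuming training
```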

Hope this helps. Good luck!
–SEH

EDIT: Here’s another link I found extremely useful when facing a similar issue.