PyTorch won't run on GPU

I have installed PyTorch via conda using the following command:

conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
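
As a quick sanity check, something like the following should confirm whether the conda install picked up the CUDA build (the printed device name will depend on the machine; I'd expect my GTX 1050 here):

import torch

print(torch.__version__)              # installed PyTorch version
print(torch.version.cuda)             # CUDA version of the build; None means a CPU-only build
print(torch.cuda.is_available())      # should print True if the driver and CUDA build match
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "GeForce GTX 1050"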

I have a GTX 1050 GPU and the latest drivers installed on a Windows 10 laptop. All I'm trying to do is train a simple neural network on the GPU, but no matter what I do, training runs on the CPU and the GPU is not utilized at all. Here is the code I'm using:

import numpy as np
import torch
import torchvision
from torchvision import datasets, transforms
from torch import nn, optim

# Pick the GPU if CUDA is available, otherwise fall back to the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
dtype = torch.cuda.FloatTensor  # legacy tensor-type alias; not actually used below

# Normalized MNIST dataset and DataLoader
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
mnist_data = torchvision.datasets.MNIST(r'D:\python_workspace', train=True, transform=transform, download=True)
train_data = torch.utils.data.DataLoader(mnist_data, batch_size=100, shuffle=True)

input_size = 784
hidden_size = 30
output_size = 10

# Simple fully connected network: 784 -> 30 -> 10, with sigmoid activations
model = nn.Sequential(nn.Linear(input_size, hidden_size),
                      nn.Sigmoid(),
                      nn.Linear(hidden_size, output_size),
                      nn.Sigmoid())

# Move the model's parameters to the selected device
model = model.to(device)

criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=3, momentum=0.9)

epochs = 15
for e in range(epochs):
    running_loss = 0
    for images, labels in train_data:
        # Move the batch to the same device as the model
        images = images.to(device)
        labels = labels.to(device)

        # One-hot encode the targets; pinning num_classes keeps every batch at 10 columns
        # even when a batch happens to be missing a digit class
        labels = torch.nn.functional.one_hot(labels, num_classes=output_size).float()
        # Flatten the 28x28 images into 784-dimensional vectors
        images = images.view(images.shape[0], -1)

        optimizer.zero_grad()
        output = model(images)

        loss = criterion(output, labels)
        loss.backward()

        optimizer.step()
        running_loss += loss.item()
    else:
        # for/else: this runs once the inner loop has finished the epoch
        print("Epoch {} - Training loss: {}".format(e, running_loss/len(train_data)))

Why does this code not run on the GPU? FYI, print(device) gives device(type='cuda', index=0).

Any ideas?

I believe torch.device() expects a device type; what you want is

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

Your tensors will then be routed to the current CUDA device.
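
For example, a quick way to see where a tensor actually lands (just an illustrative check, not required for training):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(2, 3).to(device)
print(x.device)  # cuda:0 when a GPU is present, otherwise cpu
if torch.cuda.is_available():
    print(torch.cuda.current_device())  # index of the current CUDA device, e.g. 0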

I tried changing "cuda:0" to "cuda" as you suggested, but nothing changed.

You can run this test to confirm whether you are utilising the GPU.

Start training your model (run the Python script), then run the command below in a CMD prompt window. It will list the processes using the GPU every 5 seconds:

nvidia-smi.exe -l 5
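
You can also check from inside the script itself, for example with something along these lines (this assumes the model and device variables from your code):

print(next(model.parameters()).device)      # should print cuda:0 if the model is on the GPU
print(torch.cuda.memory_allocated(device))  # non-zero once tensors live on the GPU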

I monitored GPU usage via nvidia-smi and also increased the network's size. It turns out that the network was too small to fully utilize the GPU; increasing its size increased GPU usage.
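
For anyone who hits the same thing, a wider model along these lines (the sizes here are only illustrative, not the exact ones I used, and it reuses the variables from my original script) gives the GPU enough work to show up clearly in nvidia-smi:

hidden_size = 2048  # illustrative only; much larger than the original 30
model = nn.Sequential(nn.Linear(input_size, hidden_size),
                      nn.Sigmoid(),
                      nn.Linear(hidden_size, hidden_size),
                      nn.Sigmoid(),
                      nn.Linear(hidden_size, output_size),
                      nn.Sigmoid()).to(device)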
