First-time PyTorch error

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

My code is from the PyTorch tutorial. In particular, I followed its instructions to do this:

net = Net()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)

And I also did this:
inputs, labels = inputs.to(device), labels.to(device)

The tutorial mentions how to add GPU support to the example, but doesn’t put it into the complete code listing, so I guess the way I added GPU support above is incorrect. Thanks for the help.

import torch.nn as nn
import torch.nn.functional as F

import torchvision.transforms as transforms
import torchvision
import torch

import matplotlib

matplotlib.use("TkAgg")
import matplotlib.pyplot as plt
import numpy

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)

        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)

        return x

net = Net()

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
print(transform)

trainSet = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainLoader = torch.utils.data.DataLoader(trainSet, batch_size=4, shuffle=True, num_workers=2)

testSet = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testLoader = torch.utils.data.DataLoader(testSet, batch_size=4, shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):
    running_loss = 0.0
    for i, data in enumerate(trainLoader, 0):
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)

        optimizer.zero_grad()

        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if i % 2000 == 1999:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished training!')

def imshow(img):
    img = img / 2 + 0.5
    npimg = img.numpy()
    plt.imshow(numpy.transpose(npimg, (1, 2, 0)))
    plt.show()

dataIter = iter(trainLoader)
images, labels = next(dataIter)
# imshow(torchvision.utils.make_grid(images))

print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))

outputs = net(images)

_, predicted = torch.max(outputs, 1)

print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))
dataIter = iter(testLoader)
images, labels = next(dataIter)
# imshow(torchvision.utils.make_grid(images))

correct = 0
total = 0

with torch.no_grad():
    for data in testLoader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)

        correct += (predicted == labels).sum().item()

print("accuracy: %d %%", 100 * correct / total)

The images and labels from your test dataset should be moved to the device as well, i.e. images, labels = images.to(device), labels.to(device).
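
For reference, here is a minimal sketch of the fixed evaluation loop (assuming the same net, testLoader, and device defined above):

correct = 0
total = 0

with torch.no_grad():
    for data in testLoader:
        images, labels = data
        # Move the test batch to the same device as the model
        images, labels = images.to(device), labels.to(device)

        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('accuracy: %d %%' % (100 * correct / total))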

Same here

I may have missed something because of the formatting, but in general if your model is on one device (you did net.to(device)), you should move all inputs to that model to the same device.
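
As a rough sketch (assuming a batch inputs, labels from the loader), you can look up which device the model’s parameters live on and move the batch there before the forward pass:

# Find the device that the model's parameters were moved to
model_device = next(net.parameters()).device

# Move the incoming batch to that same device before calling the model
inputs = inputs.to(model_device)
labels = labels.to(model_device)

outputs = net(inputs)
loss = criterion(outputs, labels)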

@RicCu, how did you format so nicely?

Use backticks. One to open and one to close for inline formatting, and three to open and three to close for block formatting: ` what you want formatted inline ` and
```

What you want block-formatted

```

Alternatively, use Ctrl+Shift+C.

Also, I need to mechanically add “.to(device)” in several places in the code. Is there a better way to do this? It seems awkward.

@RicCu, after adding the ```, it still only displays the original source text. Is something wrong?

Yes. Unlike TensorFlow, PyTorch does not do automatic device placement, so you must manually place each tensor and/or module on the correct device via .to(device) or the device= argument in factory functions. This has the disadvantage of being more verbose and tedious, but the advantage is that, once you get the hang of it, you can control very transparently where your operations are computed.
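
For example (just a sketch reusing the device variable from above; the to_device helper is something I made up for illustration, not a PyTorch API):

# Move an existing tensor with .to(device)
x = torch.randn(4, 3, 32, 32).to(device)

# Or create a tensor directly on the device with the device= argument
y = torch.zeros(4, 10, device=device)

# A tiny helper to cut down on repetition: move every tensor in a batch
# (tuple or list) to the given device in one call.
def to_device(batch, device):
    return tuple(t.to(device) for t in batch)

# inputs, labels = to_device(data, device)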

Be sure to place both the opening and closing ``` on their own lines, with nothing else on them. Anything in between them will be rendered as preformatted text.

They are indeed on their own lines, but the result is still not what I expected. When I initially copied my code from my editor into this box (edit mode) in the forum, the code displayed in the left panel wasn’t readable and was full of format tags, as you can see now, but the text looked normal in the right panel. Is this normal?

After I added ``` in the left panel, neither panel is readable.

Yes, some differences are expected between the left and right panels; the right one displays the rendered markdown that you’ve written on the left one.
I see that adding your code as preformatted text broke the rendering and it is displaying gibberish. I do not know why that might happen :confused: I’ve never experienced an issue like that. My guess is that the code you’re pasting already includes some tags that confuse the editor.