.to(device) not working properly

I’m trying to run the following code and train a network on CUDA. However, I get this error on the line out = net(p): RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same.

Here is the code:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
class Net(nn.Module):
    def __init__(self, input_channel=1, output_channel=1, num_filters=64, net_length=17):
        super(Net, self).__init__()
        self.net_length = net_length
        self.convs = []
        self.batch_norms = []
        self.convs.append(nn.Conv2d(in_channels=input_channel, out_channels=num_filters, kernel_size=3, padding=1))
        self.batch_norms.append(0)
        for i in range(net_length-2):
            self.convs.append(nn.Conv2d(num_filters, num_filters, 3, padding=1))
            self.batch_norms.append(torch.nn.BatchNorm2d(num_features = num_filters))
        self.convs.append(nn.Conv2d(num_filters, output_channel, 3, padding=1))
        #print(self.convs)
        
    def forward(self, x):
        #print(x.size())
        x = F.relu(self.convs[0](x))
        #print(x.size())
        for i in range(1,self.net_length-2):
            x = F.relu(self.batch_norms[i](self.convs[i](x)))
            #print(x.size())
        x = self.convs[self.net_length-1](x)
        #print(x.size())
        
        return x
net = Net()
net.to(device)

print(net)

p = torch.as_tensor(np.random.rand(1,1,32,32).astype(np.float32)).to(device)
out = net(p)

I’m not sure what is wrong with my code. I suspect that storing all the layers in plain Python lists caused the problem. Is there any remedy for this issue?

Your modules won’t be properly registered if you append them to a plain Python list.
Use nn.ModuleList instead, and the model.to() call will push all parameters to the device.
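
For reference, here is a minimal sketch of the same model with the plain lists swapped for nn.ModuleList. The nn.Identity placeholder is only there to keep the batch-norm indices aligned with the convs (the original used a literal 0, which a ModuleList can’t hold), and the forward loop bound is adjusted so every conv layer is actually applied:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, input_channel=1, output_channel=1, num_filters=64, net_length=17):
        super(Net, self).__init__()
        self.net_length = net_length
        # nn.ModuleList registers each layer as a submodule, so
        # net.to(device) moves all of their parameters as well.
        self.convs = nn.ModuleList()
        self.batch_norms = nn.ModuleList()
        self.convs.append(nn.Conv2d(input_channel, num_filters, kernel_size=3, padding=1))
        self.batch_norms.append(nn.Identity())  # placeholder; first conv has no batch norm
        for i in range(net_length - 2):
            self.convs.append(nn.Conv2d(num_filters, num_filters, 3, padding=1))
            self.batch_norms.append(nn.BatchNorm2d(num_features=num_filters))
        self.convs.append(nn.Conv2d(num_filters, output_channel, 3, padding=1))

    def forward(self, x):
        x = F.relu(self.convs[0](x))
        # middle blocks: conv -> batch norm -> relu
        for i in range(1, self.net_length - 1):
            x = F.relu(self.batch_norms[i](self.convs[i](x)))
        # final conv without activation
        x = self.convs[self.net_length - 1](x)
        return x

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net = Net().to(device)  # now also moves the layers held by the ModuleLists
p = torch.randn(1, 1, 32, 32, device=device)
out = net(p)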