Cannot transfer data and model to GPU

I have a deep but simple model (a bunch of convs, ReLU, BN) and data that I feed to it. I’ve moved the weights with model.cuda() and the data with .to(device), but it fails and tells me that the input is torch.cuda.FloatTensor and the weights are torch.FloatTensor, which is very weird.

When I don’t move the data with .to(device), it says that the input is torch.FloatTensor and the weights are torch.cuda.FloatTensor.

If I place .to(device) inside the dataset’s __getitem__, I get a CUDA multiprocessing error. Does anyone know why it mysteriously doesn’t work when I use both model.cuda() and data.to(device)?

    import torch
    from torch.utils.data import DataLoader
    from tqdm import tqdm

    dataset = SomeDataset()
    dataloader = DataLoader(dataset, batch_size=10, num_workers=1)

    model = Model3000()
    model.cuda()
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    model = model.to(device)  # redundant with model.cuda() above, but harmless
    print("Using device: ", device)  # this displays cuda:0

    n_iters = 100
    for i in range(1, n_iters):
        for batch_ndx, sample in enumerate(tqdm(dataloader, desc=f"Epoch {i}: ")):
            data = sample[0].to(device)  # move the batch to the GPU
            iz = model(data)

Could you post your model definition, please?
If you have written a custom model, you can make sure that all parameters and submodules are properly registered by calling model.parameters() and checking whether all parameters are returned.
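
Something like this would do it (a rough sketch, assuming the model and device variables from your snippet):

    # Sketch: print every registered parameter and the device it lives on.
    # Layers that are defined in __init__ but don't show up here were never
    # registered as submodules, so .cuda()/.to(device) won't move them.
    for name, param in model.named_parameters():
        print(name, tuple(param.shape), param.device)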


@ptrblck Thanks for the reply!

Now that I have inspected the model, I wonder if there is a problem with how I initialize some layers, namely b1, b2, b3, b4, and b5.

    import torch.nn as nn

    class JModel(nn.Module):
        def __init__(self, b=1, r=5):
            super(JModel, self).__init__()

            self.conv1 = ConvProc(64, 256, 11, 0.2)
            # b1..b5 are plain Python lists of Blocks (see the EDITs below)
            self.b1 = [Block(inc, outc, 11, 0.2, r) for inc, outc in get_sizes(256, 256, b)]
            self.b2 = [Block(inc, outc, 13, 0.2, r) for inc, outc in get_sizes(256, 384, b)]
            self.b3 = [Block(inc, outc, 17, 0.3, r) for inc, outc in get_sizes(384, 512, b)]
            self.b4 = [Block(inc, outc, 21, 0.3, r) for inc, outc in get_sizes(512, 640, b)]
            self.b5 = [Block(inc, outc, 25, 0.3, r) for inc, outc in get_sizes(640, 768, b)]
            self.blocks = self.b1 + self.b2 + self.b3 + self.b4 + self.b5
            self.conv2 = ConvProc(768, 896, 29, 0.4, dilation=2)
            self.conv3 = ConvProc(896, 1024, 29, 0.4)
            self.conv4 = nn.Conv1d(1024, 28, 1)

        def forward(self, x):
            feats = self.conv1(x)
            for block in self.blocks:
                feats = block(feats)
            feats = self.conv2(feats)
            feats = self.conv3(feats)
            feats = self.conv4(feats)
            return feats

EDIT: now that I’ve looked it up, it seems I need nn.ModuleList?
EDIT2: It works with nn.ModuleList.
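
For reference, a minimal sketch of the fix, keeping the same Block/get_sizes helpers as above:

    # Wrapping the combined list in nn.ModuleList registers every Block as a
    # submodule, so model.cuda()/model.to(device) also moves their parameters.
    self.blocks = nn.ModuleList(self.b1 + self.b2 + self.b3 + self.b4 + self.b5)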

Thanks for pointing me in the right direction!


Thanks for the hint. I needed to use nn.ModuleList, since I define a list of layers in my class implementation and they were not being transferred to the GPU (i.e. they were not listed in model.parameters()).
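
In case it helps anyone else, here is a tiny self-contained illustration of the difference (the classes are made up for the example):

    import torch.nn as nn

    class PlainList(nn.Module):
        def __init__(self):
            super().__init__()
            # plain Python list: the Linear layers are NOT registered
            self.layers = [nn.Linear(4, 4) for _ in range(3)]

    class WithModuleList(nn.Module):
        def __init__(self):
            super().__init__()
            # nn.ModuleList: the Linear layers ARE registered
            self.layers = nn.ModuleList(nn.Linear(4, 4) for _ in range(3))

    print(len(list(PlainList().parameters())))       # 0
    print(len(list(WithModuleList().parameters())))  # 6 (weight + bias per layer)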