Expected object of device type cuda but got device type cpu for argument #2 'mat1' in call to _th_addmm

Hi,
I've been trying to use CUDA to train my neural network on Colab, but I'm running into trouble.
Even though I set all tensors to use cuda(), it still says it's getting a CPU tensor for the device.
Here's the error stack:

RuntimeError                              Traceback (most recent call last)
<ipython-input-21-fc6b4297452e> in <module>()
    221 perturbations = int(args.ptb_rate * (pure_adj.sum()//2))
    222 #test(adj, model)
--> 223 test(adj, model)
    224 #print(torch.cuda.device_count())
    225 

6 frames
<ipython-input-21-fc6b4297452e> in test(adj, model)
    208     if source_domain == target_domain and train_mode == True:
    209         for epoch in range(args.epochs):
--> 210             train(epoch, adj, model, optimizer)
    211         print("Optimization Finished!")
    212         print("Total time elapsed: {:.4f}s".format(time.time() - t_total))

<ipython-input-21-fc6b4297452e> in train(epoch, adj, model, optimizer)
    145     model.train()
    146     optimizer.zero_grad()
--> 147     output, ctx = model(features, adj, nums)
    148 
    149     loss_train = F.nll_loss(output[idx_train], labels[idx_train])    #get loss for predicted log-likelyfood on train data

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

/content/GCN_models.py in forward(self, inputx, adj, nums)
     32 #                x[it] = self.linear_p(torch.FloatTensor(k))
     33 #            else:
---> 34 #                x[it] = self.linear_u(torch.FloatTensor(k))
     35         # review
     36         t1 = torch.FloatTensor(inputx[:nums[0][0]]).cuda()

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py in forward(self, input)
     85 
     86     def forward(self, input):
---> 87         return F.linear(input, self.weight, self.bias)
     88 
     89     def extra_repr(self):

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
   1608     if input.dim() == 2 and bias is not None:
   1609         # fused op is marginally faster
-> 1610         ret = torch.addmm(bias, input, weight.t())
   1611     else:
   1612         output = input.matmul(weight.t())

RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mat1' in call to _th_addmm

In my code I set:

if has_cuda:
    model.cuda()
    adj = adj.cuda()
    labels = labels.cuda()
    idx_train = idx_train.cuda()
    idx_val = idx_val.cuda()
    idx_test = idx_test.cuda()

The input is also moved to CUDA inside the nn layer's forward pass.

Hi @Kai_Eiji
I'm guessing there is some operation in your code that mixes a tensor on the CPU with a tensor on the GPU. You'll need to check the code again.
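One way to track it down is to print the device of everything right before the model call; a minimal sketch (the names features, adj, labels, and model are taken from your snippet and traceback):

# Quick device check before model(features, adj, nums).
# Anything that prints "cpu" here is a candidate for the mismatch.
for name, t in [("features", features), ("adj", adj), ("labels", labels)]:
    if torch.is_tensor(t):
        print(name, t.device)

# After model.cuda(), the parameters should all report cuda:0.
print("model", next(model.parameters()).device)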

Maybe features and/or nums have to be put on the GPU as well?
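For example, a sketch extending your own setup block (assuming features is a tensor; if it's a NumPy array, convert it to a tensor first):

if has_cuda:
    model.cuda()
    adj = adj.cuda()
    labels = labels.cuda()
    idx_train = idx_train.cuda()
    idx_val = idx_val.cuda()
    idx_test = idx_test.cuda()
    # features must live on the same device as the model's weights,
    # otherwise F.linear receives a CPU mat1 and raises exactly this error
    features = features.cuda()

Also note that torch.FloatTensor(...) inside forward (as in GCN_models.py) always creates a CPU tensor, so every tensor built that way needs .cuda() (or .to(next(self.parameters()).device)) before it reaches a Linear layer.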