Error when implementing custom loss: RuntimeError: tensors are on different GPUs

Hi,

I am implementing the backward pass of a custom loss. When I run the program, it fails with an error like:

RuntimeError: tensors are on different GPUs

Actually, I only have one GPU.

The code is like this:

def backward(self, grad_output):

    targets = self.targets_one_hot # we need binary for targets
    intersects, unions = self.intersect, self.union
    print 'targets size: ',targets.size(),'unions size: ',unions.size(),'intersects size: ',intersects.size()
    for i in range(0,self.numOfCategories):
        input = self.inputs[:,i,...]
        ...  # (omitted code; gt and IoU2 are computed here)
        pred = torch.mul(input, IoU2) #input[:,1] is equal to input[:,1,...]
        print 'gt size: ',gt.size(),' pred size: ',pred.size()
        dDice = torch.add(torch.mul(gt, 2), torch.mul(pred, -4))
        if i==0:
            prev = torch.mul(dDice, -grad_output[0])
        else:
            curr = torch.mul(dDice, grad_output[0])
            grad_input = torch.cat((prev,curr), 0)
            prev = curr
            

    return grad_input, None

Can anyone give some suggestions?

Are you mixing CPU and CUDA tensors?

This may be the problem. Since Variable doesn't support some operations, I convert them into tensors and then do the operations on those tensors. Can you please take a glance at the code I pasted?
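
In case it helps, here is a minimal sketch of the kind of mix I suspect (sizes are made up, and it assumes a CUDA device is available): a tensor created from scratch inside backward() lives on the CPU by default, while the saved inputs are CUDA tensors, so combining the two raises the device-mismatch RuntimeError.

import torch

inputs = torch.cuda.FloatTensor(4, 2).fill_(0.5)  # saved input, lives on the GPU
scale = torch.FloatTensor(4, 2).fill_(2.0)        # freshly created tensor, defaults to CPU

out = torch.mul(inputs, scale)  # raises a device-mismatch RuntimeError because one operand is on the CPU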

Thanks.

Solved.

I moved all the tensors to the GPU using something like torch.cuda.FloatTensor(sz)…
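
For anyone who hits the same error, a rough sketch of the idea (sizes are made up, and it assumes a CUDA device is available): allocate every intermediate tensor used in backward() as a CUDA tensor, for example with torch.cuda.FloatTensor or by deriving it from a saved input with input.new(), so all operands end up on the same device.

import torch

inputs = torch.cuda.FloatTensor(4, 2).fill_(0.5)   # saved CUDA input

# Allocating the intermediate directly on the GPU avoids the CPU/GPU mix:
scale = torch.cuda.FloatTensor(4, 2).fill_(2.0)
# equivalently: scale = inputs.new(4, 2).fill_(2.0)  # same type/device as `inputs`

out = torch.mul(inputs, scale)  # both operands are on the same GPU, no error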

Thanks @colesbury