Strange error from loss.backward()

I got this error when doing the backward pass:

loss.backward()
  File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Function DivBackward0 returned an invalid gradient at index 1 - expected type torch.FloatTensor but got torch.cuda.FloatTensor

The error occurs when executing loss.backward().
Any ideas are appreciated. Thanks in advance!


Hi,

This is quite unexpected. Do you have a custom autograd.Function that you defined? Do you use hooks to change any gradients during the backward pass?

I was able to solve the error. Some of the tensors were placed on the CPU and others on the GPU. To fix it, I moved all the tensors to the GPU:

torch.tensor(1, dtype=torch.float, requires_grad=True)
# changed to
torch.tensor(1, dtype=torch.float, requires_grad=True).cuda()
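
For anyone debugging a similar mix-up, a quick way to find the offending tensors is to print each tensor's device attribute. A minimal sketch (the tensor names are placeholders, not from the original code):

import torch

t_cpu = torch.tensor(1.0, requires_grad=True)
t_gpu = torch.tensor(1.0, requires_grad=True).cuda()
print(t_cpu.device)   # cpu
print(t_gpu.device)   # cuda:0
print(t_gpu.is_cuda)  # True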

Hi there,

The solution addressed above is absolutely correct: some of the tensors are on the GPU and some are on the CPU.

As suggested above, you can move a tensor to the GPU with a .cuda() call; similarly, you can move a tensor or model to the CPU with a .cpu() call. It will come in handy.

You can use the solution above, but defining the device on which the networks and tensors live is the cleaner and more informative approach. Define the device with:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

torch.cuda.is_available() checks whether CUDA (i.e. a GPU) is available to you.

Now, I can already see that you have CUDA on your system, but it is still a useful extra line for checking which device you have available while working with PyTorch.

Then the model can be moved to that device as:
model = model.to(device)

and tensors (t) can be put on the device the same way:
t = t.to(device)
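
Putting these steps together, a minimal end-to-end sketch might look like the following (the model, loss, and data here are placeholders, not the original poster's code):

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(10, 1).to(device)     # move the model's parameters to the device
x = torch.randn(4, 10, device=device)   # create the inputs directly on the same device
y = torch.randn(4, 1, device=device)

loss = nn.functional.mse_loss(model(x), y)
loss.backward()  # every tensor lives on one device, so no mismatch in backward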

Hope this helps; this is just further info.
Thanks

Good to hear it’s working.
However, note that you are creating a non-leaf tensor with the .cuda() call, as described by @albanD here, so you would probably want to create the tensor as:

torch.tensor(1, dtype=torch.float, requires_grad=True, device='cuda:0')
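
The difference matters because autograd only accumulates .grad on leaf tensors. A minimal sketch illustrating the distinction:

import torch

a = torch.tensor(1.0, requires_grad=True).cuda()            # non-leaf: output of the cuda() op
b = torch.tensor(1.0, requires_grad=True, device='cuda:0')  # leaf created directly on the GPU

print(a.is_leaf, b.is_leaf)  # False True

(a * 2).backward()
(b * 2).backward()
print(a.grad)  # None: gradients are not retained on non-leaf tensors
print(b.grad)  # tensor(2., device='cuda:0')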

I have used this, but the error here is strange. Even when using the statement above (t.to(device)), some of the tensors end up on the GPU and others on the CPU. It does not show any error during forward propagation; the error only occurs during backprop. I need some clarification on why it does not throw an error during the forward pass. TIA.

Could you post a minimal code snippet to reproduce this error?
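
For context, one common way this exact class of error can arise (an assumption about your setup, not necessarily your code) is dividing a GPU tensor by a 0-dim CPU tensor: older PyTorch versions accept the 0-dim CPU tensor as a scalar in the forward pass, but the backward pass then produces a CUDA gradient for a CPU leaf, which matches the DivBackward0 message above. A hedged sketch:

import torch

w = torch.randn(3, requires_grad=True, device='cuda:0')
s = torch.tensor(2.0, requires_grad=True)  # 0-dim tensor left on the CPU

loss = (w / s).sum()  # forward succeeds: the 0-dim CPU tensor is promoted like a scalar
loss.backward()       # on older versions this raises the RuntimeError above, because
                      # the gradient for s is computed on the GPU while s is a CPU leaf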