Strange error from loss.backward()

I got this error when doing the backward pass:

  File "/usr/local/lib/python3.6/dist-packages/torch/", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Function DivBackward0 returned an invalid gradient at index 1 - expected type torch.FloatTensor but got torch.cuda.FloatTensor

This occurs when executing loss.backward().
Any idea is appreciated. Thanks in advance!



This is quite unexpected. Do you have a custom autograd.Function that you defined? Do you use hooks to change some gradients during the backward pass?

I was able to solve the error. Some of the tensors were placed on the CPU and others on the GPU. To fix the issue I moved all the tensors to the GPU.

torch.tensor(1, dtype=torch.float, requires_grad=True) 
#changed to 
torch.tensor(1, dtype=torch.float, requires_grad=True).cuda()
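For illustration, here is a minimal device-agnostic sketch of the same fix (the variable names are hypothetical, and it falls back to CPU when no GPU is present):

```python
import torch

# Pick a single device for every tensor; falls back to CPU without a GPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Creating the tensors directly on the target device keeps them leaf tensors
# and guarantees every operand in the graph lives on the same device.
w = torch.tensor(1.0, requires_grad=True, device=device)
x = torch.randn(4, device=device)

loss = (x / w).sum()   # no CPU/GPU mix, so DivBackward0 gets matching types
loss.backward()        # completes without a device-mismatch error
print(w.grad is not None)  # True
```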

Hi there,

The solution addressed above is absolutely correct: some of the tensors are on the GPU and some are on the CPU.

Just as the solution above moves a tensor to the GPU with a .cuda() call, you can move a tensor or model to the CPU with a .cpu() call. This can come in handy.

You can use the solution above as is, but defining the device on which the network and tensors live is the cleaner and more informative approach. Define the device with:

Device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

torch.cuda.is_available() checks whether CUDA (i.e., a GPU) is available to you.

I can see that you already have CUDA on your system, but this is still a useful extra line for checking which device you have when working with PyTorch.

Then the model can be moved to that device as:
model = model.to(Device)

and a tensor (t) can be put on the device the same way:
t = t.to(Device)
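Putting the pieces together, here is a minimal runnable sketch of the pattern (the layer sizes and tensor shapes are arbitrary, chosen only for illustration):

```python
import torch
import torch.nn as nn

Device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(3, 1).to(Device)  # nn.Module.to() moves parameters in place
t = torch.randn(2, 3).to(Device)    # Tensor.to() returns a copy on Device

out = model(t)                      # both operands now live on the same device
print(out.shape)  # torch.Size([2, 1])
```

Because `Device` is computed once, the same script runs unchanged on a CPU-only machine and on a GPU machine.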

Hope this helps, this is just for further info.

Good to hear it’s working.
However, note that you are creating a non-leaf tensor with the .cuda() call, as described by @albanD here, so you would probably want to create the tensor as:

torch.tensor(1, dtype=torch.float, requires_grad=True, device='cuda:0')
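To see why this matters: any operation applied after construction returns a new, non-leaf tensor, so gradients would accumulate on the original tensor rather than the one you keep. A small CPU-runnable sketch (using .double() as a stand-in for .cuda(), since both are ordinary ops that return a new tensor):

```python
import torch

# An op applied after construction makes the result a non-leaf tensor,
# so .grad would be populated on the original tensor, not on `a`.
a = torch.tensor(1.0, requires_grad=True).double()

# Setting the attributes up front at construction time keeps `b` a leaf.
b = torch.tensor(1.0, dtype=torch.double, requires_grad=True)

print(a.is_leaf, b.is_leaf)  # False True
```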

I have used this, but the error here is strange. Even though I am using the statement above, some of the tensors end up on the GPU and others on the CPU. No error is shown during the forward pass; the error only occurs during backprop. I need some clarification on why it does not throw an error during the forward pass. TIA.

Could you post a minimal code snippet to reproduce this error?