loss.backward()
  File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True) # allow_unreachable flag
RuntimeError: Function DivBackward0 returned an invalid gradient at index 1 - expected type torch.FloatTensor but got torch.cuda.FloatTensor
This error occurs when executing loss.backward().
Any ideas are appreciated. Thanks in advance!
The solution addressed above is absolutely correct: some of the tensors are on the GPU and some are on the CPU.
As the solutions above suggest, just as you can move a tensor to the GPU with a .cuda() call, you can move a tensor or model to the CPU with a .cpu() call. This can come in handy.
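For example, a minimal sketch (the tensor name t is just illustrative, and a CUDA-capable GPU is assumed for the .cuda() call):

import torch

t = torch.randn(3)  # tensors are created on the CPU by default
t = t.cuda()        # move to the GPU
t = t.cpu()         # move back to the CPU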
You can use the solution above, but explicitly defining the device on which the networks and tensors live is the cleaner and more informative approach. Define the device as:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
torch.cuda.is_available() checks whether CUDA (i.e. a GPU) is available to you.
I can already see that you have CUDA on your system, but this line is still a useful addition, since it checks which device is available wherever you run the code with PyTorch.
Then the model can be moved to that device:
model = model.to(device)
and tensors (t) can be put on the device the same way:
t = t.to(device)
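Putting it together, a minimal end-to-end sketch (the model, shapes, and loss here are made up purely for illustration):

import torch
import torch.nn as nn
import torch.nn.functional as F

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(4, 1).to(device)  # moves all parameters to the device
x = torch.randn(8, 4).to(device)    # inputs on the same device
y = torch.randn(8, 1).to(device)    # targets on the same device

loss = F.mse_loss(model(x), y)
loss.backward()                     # no device mismatch, so backward succeeds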
Hope this helps; this is just some further info.
Thanks
Good to hear it’s working.
However, note that you are creating a non-leaf tensor with the cuda call, as described by @albanD here, so you would probably want to create the tensor as:
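Something along these lines (a sketch; the shape is illustrative):

# creating the tensor directly on the device keeps it a leaf tensor,
# unlike torch.randn(3, requires_grad=True).cuda(), which returns a non-leaf copy
t = torch.randn(3, device=device, requires_grad=True)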
I have used this, but the error here is strange. Even though I used the statement above (t.to(device)), some of the tensors are on the GPU and others are on the CPU. It does not raise any error during forward propagation; the error occurs only during backprop. I need some clarification on why it does not throw an error during the forward pass. TIA.
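For what it's worth, a minimal sketch of the kind of mixed-device code that can behave this way on some PyTorch versions (the names are made up, and a CUDA-capable GPU is assumed):

import torch

a = torch.randn(3, device='cuda', requires_grad=True)  # CUDA tensor
b = torch.tensor(2.0, requires_grad=True)              # CPU scalar (0-dim) leaf tensor

loss = (a / b).sum()  # forward can succeed: the 0-dim CPU tensor is promoted like a Python scalar
loss.backward()       # backward fails: DivBackward0 returns a torch.cuda.FloatTensor
                      # gradient for b, which lives on the CPU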