NLP. RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Hi, I have the following code: https://gist.github.com/vorlkets/f38c91aa52cb4e68976e1f638d251a85. When I run it, I get the following error:

python hw2main.py
Using device: cpu

Traceback (most recent call last):
  File "hw2main.py", line 142, in <module>
    main()
  File "hw2main.py", line 76, in main
    loss.backward()
  File "C:\Users\Cliff\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "C:\Users\Cliff\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\autograd\__init__.py", line 97, in backward
    Variable._execution_engine.run_backward(
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

What does this error tell me, and how can I fix it?

I haven't checked the complete code, as it's quite long, but if you are using your custom loss definition, note that operations such as torch.round() and torch.argmax() are not differentiable and will thus detach the loss value from the computation graph:

import torch

x = torch.randn(10, requires_grad=True)
# round()/.long() and argmax() are not differentiable,
# so their results are detached from the computation graph
ratingOutput = torch.round(x).long()
categoryOutput = torch.argmax(x)
print(ratingOutput.grad_fn)
> None
print(categoryOutput.grad_fn)
> None

As you can see, the .grad_fn attributes are None after the operations used in your loss() definition, so autograd cannot backpropagate through them, and loss.backward() raises exactly the RuntimeError you are seeing.
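
If that's what your loss() does, the usual fix is to compute the loss directly on the model's raw outputs (logits) with a differentiable criterion, and apply torch.round()/torch.argmax() only when computing predictions for metrics. Here is a minimal sketch; the names, shapes, and the two-headed rating/category setup are assumptions on my part, since I haven't gone through your full code:

import torch
import torch.nn.functional as F

# Dummy stand-ins for the model's raw outputs and targets
# (names and shapes are assumptions, not from your gist)
batchSize, numCategories = 8, 5
ratingLogits = torch.randn(batchSize, requires_grad=True)
categoryLogits = torch.randn(batchSize, numCategories, requires_grad=True)
ratingTarget = torch.randint(0, 2, (batchSize,))
categoryTarget = torch.randint(0, numCategories, (batchSize,))

# Compute the loss on the raw logits, which still carry a grad_fn
ratingLoss = F.binary_cross_entropy_with_logits(ratingLogits, ratingTarget.float())
categoryLoss = F.cross_entropy(categoryLogits, categoryTarget)
loss = ratingLoss + categoryLoss
loss.backward()  # works, since the computation graph is intact

# Use the non-differentiable ops only for predictions/metrics
ratingPred = torch.round(torch.sigmoid(ratingLogits)).long()
categoryPred = torch.argmax(categoryLogits, dim=1)

This keeps the graph intact for the backward pass while still giving you discrete predictions for your accuracy calculation.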