Autograd works only if you perform all operations on Variables, so that it knows what has changed. In your code, `midvar.data[:,0]` modifies the tensor wrapped inside the Variable directly, which autograd cannot see. If you instead apply the same operation to the Variable itself, autograd computes the gradients correctly.
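For example, here is a minimal sketch using the 0.x-style Variable API from this thread (the `x * 2` step and the shapes are placeholders of mine; only `midvar` is from your code):

```python
import torch
from torch.autograd import Variable

x = Variable(torch.randn(4, 3), requires_grad=True)
midvar = x * 2

# midvar.data[:, 0] = 0  # wrong: writes to the wrapped tensor behind
#                        # autograd's back, so gradients silently go stale

midvar[:, 0] = 0         # right: indexed assignment on the Variable itself
                         # is recorded in the graph

midvar.sum().backward()
print(x.grad)            # column 0 correctly receives zero gradient
```

The indexed assignment is still an in-place operation, but because it goes through the Variable, autograd tracks it and can account for it in the graph.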
Hi, @richard
Autograd works correctly with the code you supplied. However, a runtime error still occurs with the code below, which is part of the forward function of my custom module.
```
Traceback (most recent call last):
  File "/home/zhangyi/pytorch-ws/test_PGNNet.py", line 31, in <module>
    main()
  File "/home/zhangyi/pytorch-ws/test_PGNNet.py", line 27, in main
    loss.backward()
  File "/home/zhangyi/miniconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 148, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/zhangyi/miniconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
```
The reason for the error should be clear from the error message: you are modifying, in place, a Variable whose values are needed for gradient computation. Once it is overwritten, PyTorch no longer has the original input and output values, and thus cannot compute the gradient.

This is different from the case above. There, a value was overwritten on a variable that is not needed anywhere else to compute a gradient. In your code, however, `message_weight` must have been used by some backward function to compute a gradient; hence the error.
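A minimal repro of this failure mode (a toy example of mine, not your module; `message_weight` plays the role of the saved value in your case):

```python
import torch
from torch.autograd import Variable

x = Variable(torch.ones(3), requires_grad=True)
y = x.exp()        # exp saves its output, since d/dx exp(x) = exp(x)
y[0] = 0.0         # in-place write clobbers the saved output
loss = y.sum()
loss.backward()    # RuntimeError: one of the variables needed for gradient
                   # computation has been modified by an inplace operation
```

Replacing the in-place write with an out-of-place operation (e.g. multiplying by a mask) avoids the error.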
I am very new to Torch, so please forgive my novice questions. I am getting the same error with the following:
```python
loss_l, loss_c = criterion(out, targets)
loss = loss_l + loss_c
loss.backward()
```
Any advice on what I can do? I would really appreciate it.