Autograd for in-place setting value


(Zhang Yi) #1

Hi, does autograd support in-place setting of values?

import torch
from torch.autograd import Variable

invar = Variable(torch.rand(5, 5), requires_grad=True)
midvar = invar + 1
midvar.data[:, 0] = 0
loss = midvar.sum()
loss.backward()
print(invar.grad.data)

The output is:

 1  1  1  1  1
 1  1  1  1  1
 1  1  1  1  1
 1  1  1  1  1
 1  1  1  1  1

I expected the grad of invar to be:

 0  1  1  1  1
 0  1  1  1  1
 0  1  1  1  1
 0  1  1  1  1
 0  1  1  1  1

If it is not supported, is there any way to implement this behavior, for example by encapsulating it as a torch.autograd.Function?
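For example, I imagine the Function route would look roughly like the sketch below (my own guess, not sure it is the right approach):

import torch
from torch.autograd import Variable, Function

class ZeroFirstColumn(Function):
    # Hypothetical Function (illustration only): zeroes column 0 in
    # forward and blocks the corresponding gradient in backward.
    @staticmethod
    def forward(ctx, input):
        output = input.clone()
        output[:, 0] = 0
        return output

    @staticmethod
    def backward(ctx, grad_output):
        grad_input = grad_output.clone()
        grad_input[:, 0] = 0
        return grad_input

invar = Variable(torch.rand(5, 5), requires_grad=True)
midvar = ZeroFirstColumn.apply(invar + 1)
loss = midvar.sum()
loss.backward()
print(invar.grad.data)  # column 0 should be 0, the rest 1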


(Richard) #2

Autograd works only if you perform all operations on Variables, so that it knows what has changed.
In your code, midvar.data[:, 0] modifies the tensor wrapped inside the Variable directly. Instead of doing that,
applying the same operation to the Variable itself lets autograd compute the gradients correctly:

import torch
from torch.autograd import Variable

invar = Variable(torch.rand(5, 5), requires_grad=True)
midvar = invar + 1
midvar[:, 0] = 0
loss = midvar.sum()
loss.backward()
print(invar.grad.data)
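If you want to avoid the in-place write entirely, I believe multiplying by a constant mask built outside the graph gives the same values and the same gradient (untested sketch):

import torch
from torch.autograd import Variable

invar = Variable(torch.rand(5, 5), requires_grad=True)
midvar = invar + 1

# The mask is a plain constant: 0 in column 0, 1 elsewhere.
mask_data = torch.ones(5, 5)
mask_data[:, 0] = 0
mask = Variable(mask_data)

loss = (midvar * mask).sum()
loss.backward()
print(invar.grad.data)  # column 0 is 0, the rest 1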

(Zhang Yi) #3

Thanks~
I originally tried implementing it with midvar[:, 0] = 0 and ran into the error below, but now I cannot reproduce it… Maybe I just did something wrong.

RuntimeError: a leaf Variable that requires grad has been used in an in-place operation
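Perhaps I had done the in-place assignment on the leaf Variable itself rather than on midvar; a sketch like this (my guess, not the original code) raises that kind of error:

import torch
from torch.autograd import Variable

invar = Variable(torch.rand(5, 5), requires_grad=True)
invar[:, 0] = 0  # in-place write on the leaf itself -> RuntimeError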

Anyway, your solution works well now. :smiley:


(Zhang Yi) #4

Hi @richard,
Autograd works correctly with the code you supplied. However, a runtime error still occurs with the code below, which is part of the forward function of my custom module.

message_weight = torch.sigmoid(fc_self_out_reshape + fc_neig_out_reshape + fc_pointcloud_out_relative) 
message_weight[:, 0, :, :] = 0.5
message_weight = message_weight + 10

The error is:

Traceback (most recent call last):
  File "/home/zhangyi/pytorch-ws/test_PGNNet.py", line 31, in <module>
    main()
  File "/home/zhangyi/pytorch-ws/test_PGNNet.py", line 27, in main
    loss.backward()
  File "/home/zhangyi/miniconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 148, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/zhangyi/miniconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    variables, grad_variables, retain_graph)
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

If I comment out the line

message_weight[:, 0, :, :] = 0.5

then autograd works correctly. Very strange…

Is there any other reason for this runtime error?


(Zhang Yi) #5

I found that if I add a clone before

message_weight[:, 0, :, :] = 0.5

so that it becomes

message_weight = message_weight.clone()
message_weight[:, 0, :, :] = 0.5

then autograd works.

I am wondering why clone is not necessary in the code you supplied.


(Simon Wang) #6

The reason for the error should be clear from the error message: you are modifying, in place, a Variable whose value is needed for the gradient computation. Once it has been modified in place, PyTorch can no longer rely on the saved input and output, and thus cannot compute the gradient.


(Zhang Yi) #7

But what about this code snippet?

import torch
from torch.autograd import Variable

invar = Variable(torch.rand(5, 5), requires_grad=True)
midvar = invar + 1
midvar[:, 0] = 0
loss = midvar.sum()
loss.backward()
print(invar.grad.data)

The variable midvar is also modified in place:

midvar[:, 0] = 0

But autograd works.


(Simon Wang) #8

It is different from the case above. Here, the overwritten values belong to a variable that is not needed anywhere else to compute the gradient (the backward passes of addition and sum do not use midvar's values). In your earlier code, however, message_weight must have been saved by some backward function to compute gradients (sigmoid's backward, for instance, uses its output). Hence the error.
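A minimal sketch of the difference (assuming the error indeed comes from sigmoid saving its output for backward):

import torch
from torch.autograd import Variable

x = Variable(torch.rand(5, 5), requires_grad=True)

# Addition: backward needs neither its input nor its output,
# so overwriting part of the result in place is fine.
y = x + 1
y[:, 0] = 0
y.sum().backward()  # works

# Sigmoid: backward uses the saved output (grad = out * (1 - out)),
# so overwriting the output in place invalidates what was saved.
z = torch.sigmoid(x)
z[:, 0] = 0.5
z.sum().backward()  # RuntimeError: ... modified by an inplace operation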


(Zhang Yi) #9

Thanks for your kind reply, @SimonW


(Hao Tang) #10

Thanks, your answer works.


(Simon Wang) #11

fyi http://pytorch.org/docs/master/notes/autograd.html?highlight=saved_tensors#in-place-correctness-checks
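The check is based on a per-tensor version counter that every in-place operation bumps; roughly like this sketch (it peeks at the private _version attribute, which may behave differently across versions):

import torch

t = torch.rand(3)
print(t._version)  # 0
t.add_(1)          # an in-place op bumps the counter
print(t._version)  # 1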