Autograd computation time error: One of the variables needed for gradient computation has been modified by an inplace operation

I have a torch.autograd.variable.Variable named “unw” of shape 6x64x300 and another torch.autograd.variable.Variable named “powers” of shape 6. I need to raise each of the 6 matrices in “unw” to the power of the corresponding value in “powers”, so I wrote the code below:

for j in range(unw.shape[0]):
    unw[j, :, :] = torch.pow(unw[j, :, :], powers[j])

After computing the new “unw”, I compute

s = torch.prod(unw,dim=0)

to get a 64x300 tensor.

Later when computing torch.autograd.grad(loss, self.model.parameters()), I get an error as below

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

This is caused by the in-place modification of “unw”. How can I resolve this?
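For reference, the failure can be reproduced with a minimal sketch along these lines (names are illustrative; `x` and the `exp()` call just stand in for whatever upstream op produces “unw” with a grad history):

```python
import torch

x = torch.randn(6, 64, 300, requires_grad=True)
unw = x.exp()                 # exp saves its output for the backward pass
powers = torch.rand(6) + 0.5  # illustrative positive exponents

# Each in-place write bumps unw's version counter, invalidating
# the tensor that autograd saved for exp's backward
for j in range(unw.shape[0]):
    unw[j, :, :] = torch.pow(unw[j, :, :], powers[j])

s = torch.prod(unw, dim=0)    # 64x300

try:
    s.sum().backward()        # autograd detects the version mismatch here
    raised = False
except RuntimeError as err:
    raised = "inplace" in str(err)
```

The error surfaces only at backward time, which is why the forward loop itself runs without complaint.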

Hi,

I think the simplest would be:

unw = torch.pow(unw, powers.view(-1, 1, 1).expand_as(unw))

That removes both the in-place operation and the for-loop.
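A quick sanity check of the vectorized form against the per-matrix loop (a sketch; `x` and `exp()` are again just illustrative stand-ins for whatever produces “unw”):

```python
import torch

x = torch.randn(6, 64, 300, requires_grad=True)
unw = x.exp()                  # positive base, safe for fractional exponents
powers = torch.rand(6) + 0.5

# Out-of-place and vectorized: exponent reshaped to 6x1x1, then expanded
out = torch.pow(unw, powers.view(-1, 1, 1).expand_as(unw))

# Matches the original loop, computed out-of-place
ref = torch.stack([unw[j].pow(powers[j]) for j in range(unw.shape[0])])
assert torch.allclose(out, ref)

s = torch.prod(out, dim=0)     # 64x300
s.sum().backward()             # no in-place error this time
assert x.grad is not None
```

Since `torch.pow` broadcasts, `torch.pow(unw, powers.view(-1, 1, 1))` without the `expand_as` would also work.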


Thank you so much! This is perfect. I also had to add a .contiguous():

powers.contiguous().view(-1, 1, 1).expand_as(unw)
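The `.contiguous()` is only needed when `powers` is a non-contiguous view, since `view` requires compatible strides. A small sketch with an assumed column slice as the source of `powers`:

```python
import torch

m = torch.randn(6, 4)
powers = m[:, 0]                  # column slice: stride 4, not contiguous
assert not powers.is_contiguous()

# powers.view(-1, 1, 1) would raise a RuntimeError here;
# .contiguous() first makes a dense copy that view can handle
p = powers.contiguous().view(-1, 1, 1)
assert p.shape == (6, 1, 1)
```

Alternatively, `powers.reshape(-1, 1, 1)` covers both cases, copying only when the strides require it.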