I am trying to do the equivalent of tf.scatter_nd in PyTorch. I have some values which I want to assign to given positions of a larger tensor, and then backpropagate through it.
I found the thread "How to achieve tf.scatter_nd in pytorch?" (about a CopyNet project), but it did not help me much.
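For concreteness, idx holds (row, col) index pairs into a 5x4 tensor and myvalues holds the values to put there; something like this (illustrative shapes and values):

import torch
from torch.autograd import Variable

idx = torch.LongTensor([[0, 1], [2, 3], [4, 0]])         # (row, col) pairs
myvalues = Variable(torch.randn(3), requires_grad=True)  # values to scatter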
I tried the following:
x = Variable(torch.zeros(5, 4), requires_grad=True)
x[idx[:, 0], idx[:, 1]] = myvalues
It complains that:
a leaf Variable that requires grad has been used in an in-place operation.
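The usual workaround I have seen for that error is to write into a non-leaf clone instead of the leaf itself; a minimal sketch (in current PyTorch, where Variable is merged into Tensor):

x = torch.zeros(5, 4, requires_grad=True)
y = x.clone()                       # non-leaf copy; in-place writes are allowed here
y[idx[:, 0], idx[:, 1]] = myvalues  # differentiable index assignment
y.sum().backward()                  # gradients reach both x and myvalues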
As a test I also tried scatter_, and it fails as well:
x.scatter_(0, idx[:, 0], myvalues)
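For what it's worth, an out-of-place scatter on flattened indices does seem to avoid the in-place error; a sketch, assuming the 5x4 target from above:

flat_idx = idx[:, 0] * 4 + idx[:, 1]  # linearize (row, col) into flat indices
x = torch.zeros(20).scatter(0, flat_idx, myvalues).view(5, 4)
x.sum().backward()                    # gradient flows back to myvalues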
Is there really no way in PyTorch to achieve the behaviour of tf.scatter_nd directly?
For fun, I tried extending autograd, since this operation really doesn't seem to exist. Can someone confirm whether this is correct? I tested it and it seems to propagate the gradients back correctly.
import torch
from torch.autograd import Function

class ScatterND(Function):
    @staticmethod
    def forward(ctx, idx1, idx2, idx3, values, shape):
        # ctx is a context object that can be used to stash information
        # for backward computation
        ctx.idx1 = idx1
        ctx.idx2 = idx2
        ctx.idx3 = idx3
        outtensor = torch.zeros(shape)
        outtensor[idx1, idx2, idx3] = values
        return outtensor

    @staticmethod
    def backward(ctx, grad_output):
        # We return as many input gradients as there were arguments.
        # Gradients of non-Tensor arguments to forward must be None.
        return None, None, None, grad_output[ctx.idx1, ctx.idx2, ctx.idx3], None
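Here is how I call it to sanity-check the gradients (toy shapes and values, just for illustration):

shape = (2, 3, 4)
idx1 = torch.tensor([0, 1])
idx2 = torch.tensor([1, 2])
idx3 = torch.tensor([3, 0])
values = torch.randn(2, requires_grad=True)

out = ScatterND.apply(idx1, idx2, idx3, values, shape)
out.sum().backward()
print(values.grad)  # tensor([1., 1.]) -- one gradient per scattered value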
Hello, I want to ask whether your implementation of tf.scatter_nd in PyTorch is correct. I want to do this operation in PyTorch too, but I cannot find a suitable function for it.