Change vector value while holding gradient

Hi,

I’m trying to create my own layer. The layer does the following: it multiplies certain values of the input tensor, and if the new value is bigger than the value at a given index, the current value gets replaced with this new value.

The forward function of the layer looks like this:

for vect in x:
    for index, child_indexes in self.multi_order:
        val = torch.ones(1, requires_grad=True)
        for i in child_indexes:
            val = val * (1 - vect[i])
        val = 1 - val
        vect[index] = torch.max(vect[index], val)
    return vect

When I try to train a model on this, I get the following error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor ], which is output 0 of SelectBackward, is at version 106432; expected version 106431 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

The error obviously occurs on this line:

vect[index] = torch.max(vect[index], val)

How can I fix this error? And am I setting the requires_grad parameter correctly?

Thanks in forward

You should set requires_grad=True only if you want to read the gradients of a given Tensor through its .grad field (or pass it to autograd.grad). So here I don’t think you want to set it.
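
For example, here is a small sketch of the difference. Leaf tensors you want gradients for get requires_grad=True; intermediate values like your val inherit gradient tracking automatically from the tensors they are computed from:

import torch

# Set requires_grad=True only on leaf tensors whose gradients you want to
# read afterwards, e.g. learnable parameters:
w = torch.randn(3, requires_grad=True)
loss = (w * 2).sum()
loss.backward()
print(w.grad)  # tensor([2., 2., 2.])

# Intermediate results inherit gradient tracking from their inputs, so a
# temporary like `val` does not need requires_grad=True:
val = torch.ones(1)        # plain tensor, no requires_grad
val = val * (1 - w[0])     # tracked automatically because w requires grad
print(val.requires_grad)   # True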

I am not sure I understand your function. You have a return at the end of the first iteration of the outer loop? Also, what does self.multi_order contain?
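
In the meantime, here is a minimal sketch of one way to avoid the in-place error, assuming x is a batch of 1-D vectors, self.multi_order is a list of (index, child_indexes) pairs as in your code, and later iterations are meant to see earlier updates. Instead of writing into vect (which overwrites values autograd saved for backward), it rebuilds vect out of place with torch.where, and it collects a result for every vector rather than returning inside the loop:

import torch

def forward(self, x):
    out = []
    for vect in x:
        for index, child_indexes in self.multi_order:
            val = torch.ones(1)               # no requires_grad needed here
            for i in child_indexes:
                val = val * (1 - vect[i])
            val = 1 - val
            # Rebuild vect out of place instead of assigning into it, so no
            # value that autograd saved for backward is ever overwritten.
            mask = torch.zeros(vect.shape, dtype=torch.bool)
            mask[index] = True
            vect = torch.where(mask, torch.max(vect[index], val), vect)
        out.append(vect)
    return torch.stack(out)

Since vect is rebound to a fresh tensor on every update, later reads of vect[i] still see the earlier updates, matching the sequential behavior of your original loop.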