How to modify forward hook output correctly?

Hi, I know this question has already been answered previously, but my case goes a bit deeper: the output of the layer is not a tensor but a tuple.

Let my hook be:

def __inout_hook(self, layer, input, output):
    outc = output        # outc is still the same tuple object, not a copy
    outc[0] += 2         # += modifies the first tensor of the tuple in place

    return outc

where output is a tuple. To modify this output, I use the code above. Note that I create a new variable because I cannot reassign the first element of the tuple directly.

The problem is that when I calculate the gradients with .backward(), the following error is raised:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [32, 256, 75, 17]], which is output 0 of ReluBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
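For reference, the anomaly detection mentioned in the hint can be enabled before the forward pass; a minimal sketch (model, x, and the loss here are placeholder names, not my actual code):

import torch

torch.autograd.set_detect_anomaly(True)  # makes the backward error point at the offending forward op

output = model(x)       # forward pass, runs the registered hooks
loss = output.sum()     # placeholder scalar loss
loss.backward()         # the traceback now includes the operation that was modified in place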

I wonder how I would correctly modify a layer's output from a forward hook.

Thank you beforehand.

Hi @gggg111,

The error message indicates that a tensor in the input or output of a layer has been modified in place, which prevents the gradient computation during backpropagation.
To fix this, you can create a copy of the output tensor and modify the copy instead of modifying the original tensor in place. Can you try the code below?

def __inout_hook(self, layer, input, output):
    outc = output.clone()  # work on a copy so the original activation is left untouched
    outc[0] += 2

    return outc
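For a layer whose output is a single tensor, the idea looks like this end to end; a minimal self-contained sketch (the toy model and the add_two_hook name are just for illustration):

import torch
import torch.nn as nn

def add_two_hook(module, input, output):
    outc = output.clone()  # copy first, so the saved activation keeps its version counter
    outc += 2              # in-place on the copy is fine
    return outc            # the returned tensor replaces the layer's output

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
handle = model[1].register_forward_hook(add_two_hook)

x = torch.randn(3, 4, requires_grad=True)
model(x).sum().backward()  # no "modified by an inplace operation" error
handle.remove()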

Hi, @nkdatascientist.

Thank you very much for your answer!

Because the output argument is a tuple (and not a tensor), I cannot call .clone() on it directly. So what I did, while it may seem rudimentary, works: I clone the first element, modify the clone, and rebuild the tuple, like so:

def __inout_hook(self, layer, input, output):
    output0 = output[0].clone()         # copy the first tensor of the tuple
    output0 += 2                        # modify the copy, not the original activation

    self.output = (output0, output[1])  # rebuild the tuple with the modified tensor

    return self.output
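
An out-of-place variant of the same idea (a sketch, assuming the layer always returns a 2-tuple of tensors) avoids the clone and the in-place addition altogether:

def __inout_hook(self, layer, input, output):
    # Build a fresh tuple with a fresh tensor; nothing is modified in place.
    return (output[0] + 2, output[1])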

If there is a better way to do this, don't hesitate to comment!

Much appreciated.