Can't I change the activation value of a model before the loss.backward() call?

Hi. While training ResNet-20 on the CIFAR-10 dataset, I want to change an activation value during backpropagation, but I get the following error.

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128, 16, 32, 32]], which is output 0 of ReluBackward1, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Can't I change an activation value that was saved during the forward pass and have it used during the backward pass?

Thank you.

You can change it. But if you change it in-place and something else needs the original value (the ReLU in this case), then the ReLU cannot compute its backward anymore. Hence the error.
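For context, here is a minimal sketch (not from the original post) that reproduces the same version-counter error: ReLU saves its output for the backward pass, so editing that output in-place before calling backward() trips autograd's check.

```python
import torch

x = torch.randn(4, requires_grad=True)
y = torch.relu(x)   # ReLU saves its output to compute its backward
z = y.sum()

y.add_(1.0)         # in-place edit bumps y's version counter

# RuntimeError: one of the variables needed for gradient computation
# has been modified by an inplace operation ...
z.backward()
```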

Thank you for your reply. Then is there no way to use the changed activation instead of the original one when computing the gradient?

But then you won't be computing true gradients anymore. Is that fine?

I want to see the code that computes the gradient for an existing built-in function (for example, torch.nn.Conv2d). If that's possible, maybe I can change the activation value to whatever I want when calculating the gradient.

There is no unified way to compute these.
But even if you do this, you won't be computing true gradients anymore.
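For a single op, though, here is a rough sketch of what this could look like, assuming the torch.nn.grad.conv2d_input / conv2d_weight helpers, which compute a convolution's gradients for one layer: a custom autograd.Function that saves the activation in forward and substitutes a modified copy in backward. Keep in mind that as soon as you swap in a modified activation, the result is no longer the true gradient of the loss.

```python
import torch
import torch.nn.functional as F
from torch.nn.grad import conv2d_input, conv2d_weight

class ModifiedActivationConv2d(torch.autograd.Function):
    """Conv2d whose backward uses a (possibly modified) copy of the saved input."""

    @staticmethod
    def forward(ctx, inp, weight):
        ctx.save_for_backward(inp, weight)
        return F.conv2d(inp, weight, padding=1)

    @staticmethod
    def backward(ctx, grad_out):
        inp, weight = ctx.saved_tensors
        # Work on a fresh tensor instead of editing the saved one in-place;
        # the clamp here is just a placeholder for whatever change you want.
        modified_inp = inp.clamp(min=0)
        grad_inp = conv2d_input(inp.shape, weight, grad_out, padding=1)
        grad_weight = conv2d_weight(modified_inp, weight.shape, grad_out, padding=1)
        return grad_inp, grad_weight

# Usage sketch
x = torch.randn(128, 16, 32, 32, requires_grad=True)
w = torch.randn(16, 16, 3, 3, requires_grad=True)
out = ModifiedActivationConv2d.apply(x, w)
out.sum().backward()
```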

That’s too bad. Okay. Thank you for your answer.