How can I modify and rewrite the activation output of a layer before passing it to the next layer?

The LeNet architecture is: CONV1 → RELU1 → POOL1 → CONV2 → RELU2 → POOL2 → fully connected layers.

Suppose that, during the forward pass, I want to modify the output of RELU1 and write the modified version back before it reaches the POOL1 layer, so that the change is also reflected in the backward pass.

How can I achieve this in PyTorch?

You could use forward hooks as described here and manipulate the output in the hook.
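A minimal sketch of that approach: registering a forward hook on the ReLU module and returning a new tensor from the hook replaces the module's output, so the modification is part of the autograd graph and affects the backward pass. The model, layer indices, and the particular modification (scaling by 2) are illustrative assumptions, not your exact LeNet.

```python
import torch
import torch.nn as nn

# Toy stand-in for the CONV1 -> RELU1 -> POOL1 portion of LeNet.
model = nn.Sequential(
    nn.Conv2d(1, 6, 5),   # CONV1
    nn.ReLU(),            # RELU1
    nn.MaxPool2d(2),      # POOL1
)

def modify_activation(module, inputs, output):
    # Returning a tensor from a forward hook replaces the module's
    # output, so POOL1 sees the modified activation and gradients
    # flow through this operation in the backward pass.
    return output * 2.0   # example modification

handle = model[1].register_forward_hook(modify_activation)

x = torch.randn(1, 1, 32, 32, requires_grad=True)
out = model(x)
out.sum().backward()      # gradients reflect the modified activation
handle.remove()           # detach the hook when no longer needed
```

Note that the hook runs on every forward pass while registered, so remove the handle once you are done experimenting.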

Thank you for your help. In the end, I fixed this by declaring a custom ReLU class, so that the modification also takes effect in the backward pass.
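For readers landing here: one common way to implement such a custom ReLU is via `torch.autograd.Function`, where you define both the modified forward and the matching backward. This is a sketch under that assumption; the class name and the specific modification (scaling the ReLU output by 0.5) are made up for illustration.

```python
import torch

class ModifiedReLU(torch.autograd.Function):
    """ReLU whose output is modified in forward; the backward
    is written to match the modified forward exactly."""

    @staticmethod
    def forward(ctx, x):
        out = x.clamp(min=0)        # standard ReLU
        out = out * 0.5             # example modification
        ctx.save_for_backward(x)
        return out

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Gradient of 0.5 * relu(x): 0.5 where x > 0, else 0.
        grad_input = grad_output * 0.5
        grad_input = grad_input * (x > 0)
        return grad_input

x = torch.randn(4, requires_grad=True)
y = ModifiedReLU.apply(x)
y.sum().backward()
```

You can then drop `ModifiedReLU.apply` into your model's `forward` in place of the original `F.relu` call.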