How can I modify and rewrite the activation output of a layer before passing it to the next layer?

The LeNet architecture looks like this:
CONV1 -> RELU1 -> POOL1 -> CONV2 -> RELU2 -> POOL2 -> FC1 -> RELU -> …

Suppose that, during the forward pass, I want to modify the output of RELU1 and feed the modified version into POOL1, so that the change is also reflected in the backward pass.

How can I achieve this in PyTorch?

You could use forward hooks as described here and manipulate the output in place in the hook.
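A minimal sketch of that idea: the model layout and the scaling applied in the hook are illustrative assumptions, but returning a tensor from a forward hook registered on RELU1 does replace that layer's output, so POOL1 receives the modified activations and autograd accounts for the change.

```python
import torch
import torch.nn as nn

# Toy stand-in for the CONV1 -> RELU1 -> POOL1 front of LeNet (layer sizes are assumptions).
model = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5),  # CONV1
    nn.ReLU(),                       # RELU1
    nn.MaxPool2d(2),                 # POOL1
)

def relu1_hook(module, inputs, output):
    # Returning a new tensor from a forward hook replaces the layer's output,
    # so the modification also shows up in the backward pass.
    return output * 2.0  # hypothetical modification for illustration

handle = model[1].register_forward_hook(relu1_hook)

x = torch.randn(1, 1, 28, 28, requires_grad=True)
y = model(x)
y.sum().backward()  # gradients flow through the modified activations
handle.remove()
```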

Thank you for your help.
In the end I fixed this by declaring a custom ReLU class, so that the modification also takes effect in the backward pass.
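One way such a custom ReLU could look is a `torch.autograd.Function` with explicit forward and backward methods. This is only a sketch under the assumption that the modification is a simple scaling of the activations; the post above does not say what change was actually applied or how the original class was written.

```python
import torch

class ModifiedReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inp):
        # ReLU followed by a hypothetical modification (scaling by 2).
        out = inp.clamp(min=0) * 2.0
        ctx.save_for_backward(inp)
        return out

    @staticmethod
    def backward(ctx, grad_output):
        inp, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[inp < 0] = 0        # gradient of the ReLU part
        return grad_input * 2.0        # gradient of the scaling

x = torch.randn(4, requires_grad=True)
y = ModifiedReLU.apply(x)
y.sum().backward()  # backward pass uses the custom gradient above
```

Used in place of `nn.ReLU` (e.g. by calling `ModifiedReLU.apply` inside the model's `forward`), this gives full control over both the modified activations and their gradient.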