Layer-by-layer backpropagation: modifying the gradient in the chain rule

Hello,

In the backward pass of training a DNN, I need to compute the gradient layer by layer and modify it before passing it on to the previous layer.

It would be great if you could help me!

Hi,

If you want to work at the nn.Module level, you can use backward hooks to run arbitrary code alongside the backward pass: Module — PyTorch 1.8.1 documentation
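
Here is a minimal sketch of that idea (my own illustrative example, not from the docs): a full backward hook is registered on one layer, and returning a new `grad_input` from the hook replaces the gradient that flows to the previous layer. The network, the scaling factor, and the function names are just placeholders for whatever modification you actually need.

```python
import torch
import torch.nn as nn

# A small example network (placeholder architecture).
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

def scale_grad_input(module, grad_input, grad_output):
    # grad_input holds the gradients w.r.t. the module's forward inputs.
    # Returning a new tuple replaces the gradient that is propagated
    # to the previous layer. Here we simply scale it by 0.5 as a stand-in
    # for whatever modification you want to apply.
    return tuple(g * 0.5 if g is not None else None for g in grad_input)

# Attach the hook to the last Linear layer; the modified gradient is what
# the earlier layers will receive during the backward pass.
handle = model[2].register_full_backward_hook(scale_grad_input)

x = torch.randn(3, 4)
loss = model(x).sum()
loss.backward()   # the hook runs during this backward pass

handle.remove()   # detach the hook when it is no longer needed
```

You can register one such hook per layer if you need to modify the gradient at every step of the chain; each hook only sees and alters the gradient at its own layer boundary.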