Obtaining the gradient of a metric computed via a backward pass

For my research I'm trying to obtain the derivative of a metric computed with the XAI method LRP (Layer-wise Relevance Propagation), which is calculated via a backward pass through the network. Here is the GitHub repo of the LRP implementation I am using.

I have familiarized myself with PyTorch's autograd engine, but I am not sure how to implement this, because the implementation wraps standard PyTorch layers (Conv2d and Linear) and uses custom autograd.Function subclasses to propagate relevance through the backward pass instead of the gradient. I would somehow need to propagate backward through the graph that was created by the forward pass and the first backward pass.
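To make the kind of double-backward I mean concrete, here is a minimal sketch with plain autograd gradients as a stand-in for the LRP relevance (the toy model and the squared-gradient "metric" are hypothetical, not from the LRP repo). The key is `create_graph=True` on the first backward pass, which records the backward computation itself into the autograd graph so its result can be differentiated again:

```python
import torch
import torch.nn as nn

# Toy network standing in for the LRP-wrapped model (hypothetical).
# Tanh is used so the second derivative is nonzero everywhere.
model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))

x = torch.randn(1, 4, requires_grad=True)
out = model(x)

# First backward pass: create_graph=True keeps the backward computation
# in the graph, so grad_x is itself a differentiable tensor.
(grad_x,) = torch.autograd.grad(out.sum(), x, create_graph=True)

# Scalar metric derived from the first backward pass
# (stand-in for an LRP relevance score).
metric = grad_x.pow(2).sum()

# Second backward pass: differentiate the metric w.r.t. the input,
# i.e. backpropagate through the first backward pass.
(grad_metric,) = torch.autograd.grad(metric, x)
print(grad_metric.shape)  # torch.Size([1, 4])
```

For this to work with the LRP implementation, I assume the custom autograd.Function backward methods would themselves have to be written with differentiable tensor operations (and support double backward), since the second pass differentiates through them.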

Any ideas on how you would implement this?