Layer-wise Relevance Propagation (LRP) in PyTorch

I get the intuition behind LRP and I would like to implement it in PyTorch. However, I'm not familiar enough with PyTorch internals to know how best to get started.

The website links to an LRP wrapper for TensorFlow. Looking at the code, it seems one has to wrap each layer to add an LRP-specific code snippet. I assume the same would be required for PyTorch.

Has anyone already implemented LRP in PyTorch? Or does anyone have opinions on how to tackle this?

So what I did was to implement autograd.Functions that use the regular forward but implement the LRP rules in backward. Then I mirrored all the nn.Modules my network uses. This worked for LSTMs (ULMFiT), ResNets, and a few others for me.
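Roughly, a mirrored linear layer using the epsilon rule could look like the sketch below. This is only an illustration, not a complete implementation: the names LRPLinearFunction/LRPLinear, the eps stabilization, and the fact that the bias relevance is simply dropped are simplifying choices here.

```python
import torch
import torch.nn as nn

class LRPLinearFunction(torch.autograd.Function):
    # Regular matrix multiply in forward; LRP-epsilon rule in backward,
    # so calling .backward() propagates relevance instead of gradients.

    @staticmethod
    def forward(ctx, input, weight, bias, eps):
        output = input @ weight.t()
        if bias is not None:
            output = output + bias
        ctx.save_for_backward(input, weight, output)
        ctx.eps = eps
        return output

    @staticmethod
    def backward(ctx, relevance_out):
        input, weight, output = ctx.saved_tensors
        # Stabilized denominator z_j + eps * sign(z_j) to avoid division by zero.
        sign = torch.where(output >= 0,
                           torch.ones_like(output), -torch.ones_like(output))
        z = output + ctx.eps * sign
        s = relevance_out / z      # s_j = R_j / z_j
        c = s @ weight             # c_i = sum_j s_j * w_ji
        relevance_in = input * c   # R_i = a_i * c_i  (epsilon rule)
        # One return value per forward argument; only the input gets relevance.
        return relevance_in, None, None, None


class LRPLinear(nn.Linear):
    # Drop-in mirror of nn.Linear that routes backward through the LRP rule.
    def __init__(self, in_features, out_features, bias=True, eps=1e-6):
        super().__init__(in_features, out_features, bias=bias)
        self.eps = eps

    def forward(self, input):
        return LRPLinearFunction.apply(input, self.weight, self.bias, self.eps)
```

To read off relevances, you swap nn.Linear for the mirrored layer, run a forward pass on an input with requires_grad=True, call backward() with the output relevance (e.g. the logit of the target class), and take the relevance from input.grad.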
Note that "unconstrained" LRP for ResNets does have continuity issues with the residual connections; here is a discussion involving some co-authors of the relevant papers.

Best regards

Thomas


@tom any chance you can share some example code for a module/layer? I'm still getting into the details of PyTorch.

I have worked on this task. Here are some results; not much, but it can be considered a starting point.
https://github.com/dmitrysarov/LRP_decomposition/blob/master/LRP_notebook.ipynb

@vdw did you manage to implement LRP for PyTorch?
Best regards

If you are still looking for an explanation, see heatmapping. It seems to be a good starting point.