Accessing retained values in computation tree for LRP?

I’d like to implement layer-wise relevance propagation (a saliency method that differs from ordinary backprop) automatically by taking advantage of the autograd computation tree, but I can’t figure out whether there is a way to access retained values in PyTorch.

Is there a way of walking the Function tree after a forward computation and reading out retained values?

For example, I’d like to know what the inputs (and/or outputs) of a given function node are for the various built-in functions such as AddmmBackward or ThresholdBackward1, but it appears that these values are retained in C code and are not visible from Python.


Hi,

Walking the tree is possible: use the output’s .grad_fn property to get the first function, and then each function’s .next_functions property to climb higher in the tree.
For Functions implemented in Python, you can access the tensors that were saved with save_for_backward through the saved_tensors or saved_variables properties of that Python object.
For Functions implemented in C, it depends on the implementation, and most of them do not expose these values.
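
To make that concrete, here is a minimal sketch of walking the graph this way. The toy model and names are just illustrative, and which nodes actually expose saved tensors depends on your PyTorch version and on whether the Function was defined in Python:

```python
# Minimal sketch, assuming a reasonably recent PyTorch; the toy model below
# is illustrative only and not taken from the original post.
import torch

x = torch.randn(3, 4, requires_grad=True)
w = torch.randn(4, 2, requires_grad=True)
y = torch.relu(x @ w).sum()

def walk(fn, depth=0):
    """Recursively print the backward graph starting from a grad_fn node."""
    if fn is None:
        return
    print("  " * depth + type(fn).__name__)
    try:
        # Only Functions defined in Python typically expose their saved
        # tensors here; C-implemented nodes usually raise AttributeError.
        saved = fn.saved_tensors
    except AttributeError:
        saved = ()
    for t in saved:
        print("  " * depth + f"  saved tensor with shape {tuple(t.shape)}")
    # next_functions is a tuple of (next_node, input_index) pairs
    for next_fn, _ in fn.next_functions:
        walk(next_fn, depth + 1)

walk(y.grad_fn)
```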