I’d like to implement layerwise relevance propagation (a saliency method whose backward rules differ from ordinary backprop) automatically by taking advantage of the autograd computation tree, but I can’t figure out whether there is a way to access retained values in PyTorch.
Is there a way of walking the Function tree after a forward computation and reading out retained values?
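For concreteness, here is a minimal sketch of the kind of traversal I have in mind. The graph structure itself seems reachable through `grad_fn` and `next_functions`; the toy model and shapes are just placeholders:

```python
import torch
import torch.nn as nn

# Placeholder model, just to produce a small autograd graph.
model = nn.Sequential(nn.Linear(4, 3), nn.ReLU())
x = torch.randn(1, 4, requires_grad=True)
y = model(x)

def walk(fn, depth=0):
    """Recursively print the Function nodes reachable from an output."""
    if fn is None:  # inputs that don't require grad show up as None
        return
    print("  " * depth + type(fn).__name__)  # e.g. AddmmBackward, ThresholdBackward
    for next_fn, _ in fn.next_functions:
        walk(next_fn, depth + 1)

walk(y.grad_fn)
```

This shows me the node types and topology, but as far as I can tell not the tensors each node has saved for its backward pass, which is what I actually need.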
For example, I’d like to know what the inputs (and/or outputs) of a given function node are for the various built-in functions such as AddmmBackward or ThresholdBackward1, but it appears that these values are retained on the C side and aren’t visible from Python.
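In case it clarifies the question: the only workaround I can think of is to record the values myself with forward hooks, which only works at module granularity rather than at the level of individual Function nodes. A minimal sketch (`save_io` and the toy model are my own placeholder names):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 3), nn.ReLU())
captured = {}

def save_io(module, inputs, output):
    """Record each module's inputs and output during the forward pass."""
    captured[module] = (inputs, output)

handles = [m.register_forward_hook(save_io) for m in model]

y = model(torch.randn(1, 4))
for module, (inputs, output) in captured.items():
    print(type(module).__name__, [t.shape for t in inputs], output.shape)

for h in handles:
    h.remove()
```

Is there a way to get at the retained values directly from the Function nodes instead, so this bookkeeping isn’t necessary?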