My goal is to implement the gradient * input relevance algorithm manually on my model. I know I can get the input of any layer using forward hooks. The problem is that when I register a backward hook to obtain the gradients, I cannot access the layer's input and output values inside it. What is the best way to access a layer's input values from within a backward hook?
def __fw_hook(self, layer, input, output):
    self.input = input
    self.output = output

def __bw_hook(self, layer, input_grads, output_grads):
    self.input_grads = input_grads
    self.output_grads = output_grads
    # How to get the actual input of the current layer here?
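One way to sketch this (an assumption on my part, not the only approach): since the same module object is passed to both the forward and the backward hook, you can cache each layer's input in a dict keyed by the module during the forward pass, then look it up in the backward hook and multiply it by the incoming gradient. The `GradTimesInput` class and the hook names below are hypothetical illustrations, not part of any library:

```python
import torch
import torch.nn as nn

class GradTimesInput:
    """Sketch: cache each layer's input in the forward hook, keyed by
    the module object, so the backward hook can look it up later."""

    def __init__(self, model):
        self.inputs = {}     # module -> input tensor seen in the forward pass
        self.relevance = {}  # module -> gradient * input
        for layer in model.modules():
            if isinstance(layer, nn.Linear):
                layer.register_forward_hook(self._fw_hook)
                layer.register_full_backward_hook(self._bw_hook)

    def _fw_hook(self, layer, inputs, output):
        # inputs is a tuple of tensors; store the one for this module
        self.inputs[layer] = inputs[0].detach()

    def _bw_hook(self, layer, input_grads, output_grads):
        # the module object is the shared key between both hooks
        self.relevance[layer] = input_grads[0] * self.inputs[layer]

model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))
gti = GradTimesInput(model)
x = torch.randn(2, 4, requires_grad=True)
model(x).sum().backward()
# gti.relevance now holds gradient * input per Linear layer
```

Note the use of `register_full_backward_hook` rather than the deprecated `register_backward_hook`, which can report incorrect `grad_input` for modules with multiple operations.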