Get input and output on backward hook

My goal is to implement the gradient * input relevance algorithm manually for my model. I know I can get the input of any layer using forward hooks. The problem is that when creating a backward hook to obtain the gradients, I cannot access the input and output values of the layer inside the backward hook. What is the best way to obtain the input values of the layer in a backward hook?

  def __fw_hook(self, layer, input, output):
      # Forward hook: called as hook(module, input, output) after the forward pass
      self.input = input
      self.output = output

  def __bw_hook(self, layer, input_grads, output_grads):
      # Backward hook: called as hook(module, grad_input, grad_output)
      self.input_grads = input_grads
      self.output_grads = output_grads
      # How to get the actual input in the current layer?

You could try to use e.g. a global dict storing the forward activations, keyed by module, and index it in the backward hook. This approach sounds a bit hacky, but it works, as sketched below.
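A minimal sketch of the dict approach, assuming a toy `nn.Sequential` model and using `register_full_backward_hook` (available in recent PyTorch versions); the `activations` dict and the model are purely illustrative:

```python
import torch
import torch.nn as nn

activations = {}  # module -> its forward input, filled during the forward pass

def fw_hook(module, inputs, output):
    # inputs is a tuple of the positional inputs to module.forward
    activations[module] = inputs[0].detach()

def bw_hook(module, grad_input, grad_output):
    # Look up the stored forward input for this module
    x = activations[module]
    if grad_input[0] is not None:
        relevance = x * grad_input[0]  # gradient * input
        print(type(module).__name__, relevance.shape)

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
for m in model:
    m.register_forward_hook(fw_hook)
    m.register_full_backward_hook(bw_hook)

out = model(torch.randn(3, 4, requires_grad=True))
out.sum().backward()
```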
A cleaner way might be to implement a custom autograd.Function, which would allow you to access the forward activations in the backward function directly, but you might then also need to reimplement the actual backward pass, which sounds like unnecessary work.
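For completeness, here is a sketch of the autograd.Function route for a single linear layer. Note that `backward` has to redo the gradient math by hand, which is exactly the extra work mentioned above (all names here are illustrative):

```python
import torch

class LinearWithRelevance(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight):
        # Save the forward input so backward can access it directly
        ctx.save_for_backward(x, weight)
        return x @ weight.t()

    @staticmethod
    def backward(ctx, grad_output):
        x, weight = ctx.saved_tensors
        grad_x = grad_output @ weight   # reimplemented backward pass
        relevance = x * grad_x          # gradient * input, available right here
        print("relevance", relevance.shape)
        return grad_x, grad_output.t() @ x

x = torch.randn(3, 4, requires_grad=True)
w = torch.randn(8, 4, requires_grad=True)
LinearWithRelevance.apply(x, w).sum().backward()
```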

I’m weighing both options, but I think I’ll simply go with the first one. Thank you for your response!