Use Grad-CAM gradients during training

Hi,

I'm trying to implement a custom training procedure that involves Grad-CAM maps. In particular, I want to use them inside the loss function.
For that to work, the generated maps need to stay attached to the computation graph, so that gradients can flow through them during backpropagation.
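
To make it concrete, here is a minimal sketch of the kind of setup I have in mind. The model (resnet18), the target layer (layer4), and the way the heatmap enters the loss are just placeholders for illustration, not my real setup:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=10)

# grab the feature maps of the target layer with a forward hook,
# without detaching them from the graph
feats = {}
handle = model.layer4.register_forward_hook(
    lambda module, inp, out: feats.update(maps=out)
)

x = torch.randn(4, 3, 224, 224)
targets = torch.randint(0, 10, (4,))
logits = model(x)
score = logits[torch.arange(len(x)), targets].sum()

# create_graph=True keeps the gradient computation itself in the graph,
# so the heatmap built from it stays differentiable
grads = torch.autograd.grad(score, feats["maps"], create_graph=True)[0]
weights = grads.mean(dim=(2, 3), keepdim=True)      # Grad-CAM channel weights
cam = F.relu((weights * feats["maps"]).sum(dim=1))  # (N, H, W) heatmap

print(cam.grad_fn is not None)  # True: the heatmap can enter the loss

# toy loss term on the heatmap; gradients reach the model weights
loss = F.cross_entropy(logits, targets) + cam.mean()
loss.backward()
handle.remove()
```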

My question is: is it possible to ask PyTorch to keep tracking gradients while the tensors are manipulated by a third-party library?
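
From what I understand, autograd keeps tracking automatically as long as only torch operations are applied, and the graph only breaks if the library detaches somewhere along the way, e.g.:

```python
import torch

x = torch.randn(2, 3, requires_grad=True)

y = (x * 2).relu().mean()  # plain torch ops: still tracked
print(y.grad_fn)           # <MeanBackward0 object ...>

z = x.detach()                            # graph cut explicitly
w = torch.from_numpy(x.detach().numpy())  # numpy round-trip also cuts it
print(z.grad_fn, w.grad_fn)               # None None

with torch.no_grad():
    v = x * 2  # computed under no_grad: also untracked
print(v.grad_fn)  # None
```

Is that the right mental model, or is there a switch I'm missing?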

Alternatively, is there a way to keep gradients in the heatmap tensors produced by a Grad-CAM library without reimplementing it from scratch? Or do I have to re-implement it myself while preserving gradients throughout (practically speaking, checking that the feature maps captured with hooks, which are tensors, always keep their grad_fn through every manipulation)?
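
For clarity, by that check I mean something like the hook below, which just reports whether the captured feature maps still carry a grad_fn:

```python
import torch
from torchvision.models import resnet18

model = resnet18()

def check_hook(module, inp, out):
    # if grad_fn is None here, the graph was already broken upstream
    # and no heatmap built from `out` will be able to backpropagate
    print(type(module).__name__, "grad_fn:", out.grad_fn)

handle = model.layer4.register_forward_hook(check_hook)
model(torch.randn(2, 3, 224, 224))  # prints a non-None grad_fn if tracking is intact
handle.remove()
```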

Thanks in advance!