GradCAM - Zero output

Hi @Francesco_Grimaldi, this is a surprising result. To understand the cause further, I’d recommend looking separately at the activations and gradients of the last convolutional layer, since GradCAM is computed from their product. You can do this with Captum using LayerActivation for the activations, and recover the gradients by dividing the output of LayerGradientXActivation element-wise by that of LayerActivation.
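
As a minimal sketch of that diagnostic, something like the following should work; the toy network, layer choice, input shape, and target class here are placeholders for your own setup:

```python
import torch
import torch.nn as nn
from captum.attr import LayerActivation, LayerGradientXActivation

# Toy stand-in network; substitute your own model and its last conv layer.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
model.eval()
last_conv = model[2]                  # the final convolutional layer
inputs = torch.randn(1, 3, 32, 32)    # replace with your actual input
target = 4                            # the class giving the zero GradCAM map

# Activations of the layer, and gradient * activation for the target class.
acts = LayerActivation(model, last_conv).attribute(inputs)
grad_x_act = LayerGradientXActivation(model, last_conv).attribute(
    inputs, target=target
)

# Recover the gradients element-wise. Where an activation is exactly zero
# the gradient cannot be recovered this way and is reported as 0.
grads = grad_x_act / torch.where(acts == 0, torch.ones_like(acts), acts)
```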

It could be that the gradients of target class 4 with respect to the final conv output are mostly negative, potentially because the output is saturated, so the gradients are not representative of the “importance” of the corresponding activations. Since GradCAM applies a ReLU to the gradient-weighted sum of activations, mostly negative gradients would produce exactly the all-zero map you are seeing. Looking at the gradients and activations separately will be helpful to further understand this.
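
Continuing the sketch above (reusing `grads` and `acts`), a quick check of the gradient signs and of the GradCAM-style weighted sum can confirm whether this is what is happening; the ReLU step mirrors the one GradCAM applies at the end:

```python
# Fraction of negative gradient entries for the target class; a value
# near 1.0 means almost all activations push the target score down.
neg_frac = (grads < 0).float().mean().item()
print(f"negative gradient fraction: {neg_frac:.2%}")

# GradCAM-style map: average the gradients over the spatial dims to get
# per-channel weights, take the weighted sum of activations, then ReLU.
weights = grads.mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * acts).sum(dim=1, keepdim=True))
print("max CAM value:", cam.max().item())  # 0.0 reproduces the issue
```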

Also, Captum does offer GradCAM itself as LayerGradCAM (it is treated as a layer attribution method, since its attributions match the shape of the layer output), with an argument relu_attributions to decide whether to apply a ReLU to the final attribution.
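
As a rough sketch (reusing the same placeholder model and inputs as above), leaving relu_attributions=False lets you see whether the unrectified attributions are actually negative rather than genuinely zero:

```python
from captum.attr import LayerGradCAM

gradcam = LayerGradCAM(model, last_conv)

# With relu_attributions=False, negative attributions are preserved, so a
# map that would be all zeros after ReLU shows up as negative values here.
attr = gradcam.attribute(inputs, target=target, relu_attributions=False)
print(attr.shape)  # (1, 1, H, W): summed over the channel dimension
print(attr.min().item(), attr.max().item())
```

If attr.min() is strongly negative while attr.max() is at or below zero, that would support the saturation explanation above.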