I’m a bit of a newbie with Captum, but I’ve trained a PyTorch DenseNet121-based classifier for 224x224 medical images with four output classes. Now I’d like to interpret the results with Captum’s Guided GradCAM. Since DenseNets have dense blocks and many interconnections, I’m unsure which layer is ultimately the last conv layer that I should pass as an argument. I’ve tried the following:
gc = GuidedGradCam(net, net.features.denseblock4.denselayer16.layers.conv2)
attr_gc = attribute_image_features(gc, input, target=labels[ind])
viz.visualize_image_attr(attr_gc,
                         original_image,
                         method="heat_map",
                         sign="all",
                         show_colorbar=True,
                         title="Guided GradCAM")
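In case it helps: to pick the layer, I’ve been enumerating the model’s conv layers via `named_modules()` and taking the last `Conv2d` in registration order. A minimal sketch of that check (the `nn.Sequential` here is a hypothetical stand-in for my actual net, just to show the idea):

```python
import torch.nn as nn

def last_conv_layer(model: nn.Module):
    """Return (qualified_name, module) of the last Conv2d registered on the model."""
    last = None
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            last = (name, module)
    return last

# Hypothetical stand-in model; I run this on my trained DenseNet instead.
net_demo = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 4, 3, padding=1),
)
name, layer = last_conv_layer(net_demo)
print(name)  # "2" -> net_demo[2] is the last registered Conv2d
```

One caveat I’m aware of: `named_modules()` reflects registration order, not necessarily the order layers execute in `forward()`, so I’m not 100% sure this is a reliable way to find the “last” conv in a DenseNet.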
where the helper is similar to a Captum example:
def attribute_image_features(algorithm, input, target, **kwargs):
    net.zero_grad()
    tensor_attributions = algorithm.attribute(input, target=target, **kwargs)
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    # convert the (C, H, W) tensor to an (H, W, C) numpy array for visualization
    attr = np.transpose(tensor_attributions.squeeze(0).cpu().detach().numpy(), (1, 2, 0))
    return attr
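Just to confirm my understanding of the conversion step: the transpose maps channel-first to channel-last, which is the layout `visualize_image_attr` expects. A minimal shape check (the array here is a dummy, with the shapes my model produces):

```python
import numpy as np

# Dummy single attribution of shape (C, H, W) = (3, 224, 224),
# i.e. after squeezing off the batch dimension.
chw = np.zeros((3, 224, 224), dtype=np.float32)

# Axes (1, 2, 0): old axis 1 (H) first, old axis 2 (W) second, old axis 0 (C) last.
hwc = np.transpose(chw, (1, 2, 0))
print(hwc.shape)  # (224, 224, 3)
```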
But for some reason the attribution heatmap only shows values in a few block-shaped regions of the image (see attachment).
Why is that, and what should I fix? Thanks!