Heatmap localization

I am trying to generate a heatmap for an X-ray image, but I don't know how to get the weights of the pooling layer of my trained model.
I tried some images, but the output looks like a corrupted image.

This is the code:

import numpy as np
import cv2
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models import densenet121, densenet169, densenet201

class HeatmapGenerator():

    #---- Initialize heatmap generator
    #---- pathModel - path to the trained DenseNet model
    #---- nnArchitecture - architecture name: DENSE-NET-121, DENSE-NET-169, DENSE-NET-201
    #---- nnClassCount - class count, 14 for ChestX-ray14

    def __init__(self, pathModel, nnArchitecture, nnClassCount, transCrop):

        #---- Initialize the network
        if nnArchitecture == 'DENSE-NET-121': model = densenet121(False).cuda()
        elif nnArchitecture == 'DENSE-NET-169': model = densenet169(False).cuda()
        elif nnArchitecture == 'DENSE-NET-201': model = densenet201(False).cuda()

        model = torch.nn.DataParallel(model).cuda()

        modelCheckpoint = torch.load(pathModel)
        model.load_state_dict(modelCheckpoint['best_model_wts'], strict=False)

        #---- Keep only the convolutional feature extractor
        self.model = model.module.features
        self.model.eval()

        #---- Initialize the weights (second-to-last parameter tensor of the feature extractor)
        self.weights = list(self.model.parameters())[-2]

        #---- Initialize the image transform - resize + normalize
        normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        transformList = []
        transformList.append(transforms.Resize(transCrop))
        transformList.append(transforms.ToTensor())
        transformList.append(normalize)

        self.transformSequence = transforms.Compose(transformList)

    #--------------------------------------------------------------------------------

    def generate(self, pathImageFile, pathOutputFile, transCrop):

        #---- Load image, transform, add a batch dimension
        imageData = Image.open(pathImageFile).convert('RGB')
        imageData = self.transformSequence(imageData)
        imageData = imageData.unsqueeze_(0)

        #---- Forward pass through the feature extractor
        self.model.cuda()
        output = self.model(imageData.cuda())

        #---- Generate heatmap as a weighted sum of the feature maps
        heatmap = None
        for i in range(0, len(self.weights)):
            featureMap = output[0, i, :, :]
            if i == 0: heatmap = self.weights[i] * featureMap
            else: heatmap += self.weights[i] * featureMap

        #---- Blend original image and heatmap
        npHeatmap = heatmap.cpu().data.numpy()

        imgOriginal = cv2.imread(pathImageFile, 1)
        imgOriginal = cv2.resize(imgOriginal, (transCrop, transCrop))

        cam = npHeatmap / np.max(npHeatmap)
        cam = cv2.resize(cam, (transCrop, transCrop))
        heatmap = cv2.applyColorMap(np.uint8(255 * cam), cv2.COLORMAP_JET)

        img = heatmap * 0.5 + imgOriginal

        cv2.imwrite(pathOutputFile, img)

The bad output image: [attached image: 00000250_011_heatmap]


Pooling layers don't have any parameters, so could you explain what kind of weights you would like to get and how they would be used?
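For illustration, a quick check in PyTorch confirms this:

import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2)
print(list(pool.parameters()))  # prints [] - pooling has no learnable weights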

I am trying to generate a heatmap with a class activation map, so I need values from my trained model, but I am not sure which layer's values I should use. I searched on Google and found that I need the values of the last convolution layer, but I don't know how to access this layer to get its output.
I am still a beginner in deep learning, which is why I am facing difficulties.

Are you trying to implement Grad-CAM?

Yes, but I have little experience with it. The problem is that I need some values from my trained model, but I don't know which ones or how to get them.

You don't need the weights (parameters) of a layer to compute the heatmap with Grad-CAM. What you actually need are the activation maps (the output of a layer). You can retrieve them with a forward hook: torch.nn.modules.module.register_module_forward_hook — PyTorch 1.8.1 documentation
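For example, a minimal sketch of capturing a layer's output with a per-module forward hook (register_forward_hook, the per-module counterpart of the linked API). It assumes torchvision's densenet121; adapt the hooked module to your own model:

import torch
from torchvision.models import densenet121

model = densenet121(pretrained=False).eval()
activations = {}

# The hook runs on every forward pass and stores the module's output
def save_activation(module, inputs, output):
    activations['features'] = output.detach()

# Hook the last convolutional block of torchvision's DenseNet
handle = model.features.register_forward_hook(save_activation)

x = torch.randn(1, 3, 224, 224)       # stand-in for a transformed X-ray image
logits = model(x)

conv_out = activations['features']    # e.g. [1, 1024, 7, 7] for densenet121
handle.remove()                       # remove the hook when done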

If you refer to the original Grad-CAM paper (https://arxiv.org/pdf/1610.02391.pdf), there are three steps to compute the activation map (see the sketch after this list):

  1. Find the activations A of a conv (or other) layer and the output score y of the final layer (done with a forward hook)
  2. Compute the importance weights alpha = mean(dy/dA) (Eq. 1)
  3. Compute the activation map ReLU(sum(alpha * A)) (Eq. 2)
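Putting those three steps together, a minimal Grad-CAM sketch might look like this. Assumptions: torchvision's densenet121 as the backbone, model.features as the hooked conv block, and class_idx as the class to explain:

import torch
import torch.nn.functional as F
from torchvision.models import densenet121

model = densenet121(pretrained=False).eval()
store = {}

# Step 1: grab the activations A of the last conv block with a forward hook
def save_activation(module, inputs, output):
    output.retain_grad()          # keep dy/dA around after backward
    store['A'] = output

handle = model.features.register_forward_hook(save_activation)

x = torch.randn(1, 3, 224, 224)                      # your transformed image
logits = model(x)
class_idx = logits.argmax(dim=1).item()              # or a fixed pathology index

# Step 2: backprop the class score y to get dy/dA, then average over H and W
model.zero_grad()
logits[0, class_idx].backward()
A = store['A']                                       # [1, C, H, W]
alpha = A.grad.mean(dim=(2, 3), keepdim=True)        # [1, C, 1, 1], Eq. 1

# Step 3: weighted sum of the feature maps followed by ReLU, Eq. 2
cam = F.relu((alpha * A).sum(dim=1)).squeeze(0)      # [H, W]
cam = cam / (cam.max() + 1e-8)                       # normalize to [0, 1]
handle.remove()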

First, I have an X-ray image, and I want to get the final conv layer in the DenseNet model, then get the output of this layer, so that I can pass images through it and use that output in my next processing step.

I can do this in Keras the following way, but I want to do the same thing in PyTorch and cannot write it with the PyTorch library.

Keras code:

# Get the 512 input weights to the softmax.
class_weights = model.layers[-1].get_weights()[0]
final_conv_layer = get_output_layer(model, "bn")
get_output = kb.function([model.layers[0].input], [final_conv_layer.output, model.layers[-1].output])
[conv_outputs, predictions] = get_output([np.array([img_transformed])])
conv_outputs = conv_outputs[0, :, :, :]

To get the intermediate activation output of a layer, you can use forward hooks as described by @carloalbertobarbano. This post gives an example of how to use them.
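For reference, a rough PyTorch analogue of your Keras snippet using such a hook. This is a sketch under assumptions: model is a torchvision-style DenseNet with .features and .classifier, and img_transformed is a preprocessed CHW tensor like the one produced in your HeatmapGenerator:

import torch

model.eval()
captured = {}

# Analogue of kb.function([...], [final_conv_layer.output, model.layers[-1].output]):
# hook the final conv block, then one forward pass yields both outputs
def save_output(module, inputs, output):
    captured['conv_outputs'] = output.detach()

handle = model.features.register_forward_hook(save_output)

with torch.no_grad():
    predictions = model(img_transformed.unsqueeze(0))  # logits; apply sigmoid/softmax as needed

conv_outputs = captured['conv_outputs'][0]             # [C, H, W] feature maps

# Analogue of model.layers[-1].get_weights()[0]: the classifier weight matrix
class_weights = model.classifier.weight.detach()       # [num_classes, C]
handle.remove()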