Visualising feature maps for a pretrained model

I am building a model which has two modules: one takes the entire image to extract features, and the other takes only the body crop of the image. Both modules run in parallel, and at the end all the features are concatenated via an fc layer and finally classified.
For both modules I am currently using pretrained vgg16 models, and while training I want to visualize the feature maps of the modules. I know how to write the code for this, basically registering hooks on the specific layers whose outputs we want to visualize, but the main concern in my case is that I am directly using the pretrained model, which doesn't have any attribute or layer names that I can use in the activation function (when passing the layer name).
So how do I extract the layer names, or what attribute exactly do I need to pass to the activation function?
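The hook pattern I have in mind is roughly the usual one (a sketch; some_layer is just a placeholder for a named attribute, which is exactly what the pretrained model does not give me):

activation = {}

def get_activation(name):
    def hook(module, inp, out):
        # store the layer's output so it can be plotted after the forward pass
        activation[name] = out.detach()
    return hook

model.some_layer.register_forward_hook(get_activation('some_layer'))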

Below is the code for my model.

import torch
import torch.nn as nn
import torchvision.models as models


class createModel_Image(nn.Module):
    def __init__(self):
        super(createModel_Image, self).__init__()
        #model = models.alexnet(pretrained=True)
        model = models.vgg16(pretrained=True)
        
        # keep only the first child of vgg16, i.e. model.features (the conv layers)
        self.model = nn.Sequential(*list(model.children())[:1])
        # freeze the pretrained weights
        for param in self.model.parameters():
            param.requires_grad = False
        
    
    def forward(self, imgs):
        out = self.model(imgs)
        
        return out
        
class createModel_Body(nn.Module):
    def __init__(self):
        super(createModel_Body, self).__init__()
        #model = models.alexnet(pretrained=True)
        model = models.vgg16(pretrained=True)
        # keep only the first child of vgg16, i.e. model.features (the conv layers)
        self.model = nn.Sequential(*list(model.children())[:1])
        # freeze the pretrained weights
        for param in self.model.parameters():
            param.requires_grad = False
        

    def forward(self, imgs):
        out = self.model(imgs)
        
        return out
        
        
class BaseModel(nn.Module):
    def __init__(self,args):
        super(BaseModel, self).__init__()
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.args = args
        DROP_FIRST_CLASS = 256
        NUM_CONCEPTS = 3
        NUM_CLASSES = 26

        self.imageModel = createModel_Image()
        print(self.imageModel.model[0][28])  # last Conv2d of the vgg16 features
        
        
        self.i_features = 25088  # 512 * 7 * 7 (12544 for alexnet)
        
        self.bodyModel = createModel_Body()
        #self.conv = self.bodyModel.model[0][28]
        
        self.b_features = 25088  # 512 * 7 * 7 (12544 for alexnet)
        ### initialize the BaseModel ---------------------

            
        NUM_FEATS = self.i_features + self.b_features
        
        self.fusion = nn.Sequential(nn.Linear(NUM_FEATS, DROP_FIRST_CLASS))
        
        self.category = nn.Linear(DROP_FIRST_CLASS, NUM_CLASSES)
        
        self.cont = nn.Linear(DROP_FIRST_CLASS, NUM_CONCEPTS)
        self.sigmoid = nn.Sigmoid()

        ### ----------------------------------------------

    def forward(self, imgs, imgs_body):
        
        out_image = self.imageModel(imgs)
        out_body = self.bodyModel(imgs_body)
        
        out_image = out_image.view(-1, 512 * 7 * 7)  # flatten the (N, 512, 7, 7) vgg features
        out_body = out_body.view(-1, 512 * 7 * 7)
        
        x = torch.cat((out_image, out_body), 1)
        x = self.fusion(x)
        out_category = self.sigmoid(self.category(x)) 
        out_cont = self.sigmoid(self.cont(x))
        
        return out_category, out_cont 

I want to access the last convolutional layer (or any conv layer) of both imageModel and bodyModel to visualize their feature maps.

If I am understanding this right, you could make the hook a functools.partial function to pass in the name of the layer. The names can be obtained by installing the hooks in a loop over for name, m in model.named_modules():.
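Roughly like this, as a minimal sketch (save_activation and the activations dict are illustrative names, not existing API):

import functools

activations = {}

def save_activation(name, module, inp, out):
    # register_forward_hook calls hook(module, input, output);
    # functools.partial below binds the layer name as the first argument
    if torch.is_tensor(out):  # container modules may return tuples; keep tensor outputs only
        activations[name] = out.detach()

for name, m in model.named_modules():
    m.register_forward_hook(functools.partial(save_activation, name))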

Best regards

Thomas

I tried the way you specified and got the output below:

bodyModel.model.0.0 Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
bodyModel.model.0.1 ReLU(inplace)
bodyModel.model.0.2 Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
bodyModel.model.0.3 ReLU(inplace)
bodyModel.model.0.4 MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
bodyModel.model.0.5 Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))

but now when I try to use, say, "bodyModel.model.0.5" as the attribute name for registering a hook, it gives me an error that the attribute is invalid:

model.bodyModel.model.0.21.register_forward_hook(get_activation('bodyModel.model.0.21'))

Try m.register_forward_hook(get_activation(name)) inside that loop; the dotted names from named_modules() are not valid Python attribute paths, because the integer indices of nn.Sequential cannot be written as attributes (indexing would have to be model.bodyModel.model[0][21]). You can also sprinkle in if isinstance(m, nn.Conv2d) or so if you are only interested in the output of specific types of modules.
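Concretely, a sketch reusing your get_activation (assuming model is your BaseModel instance and imgs, imgs_body are input batches; the layer name at the end is taken from your printed list):

for name, m in model.named_modules():
    if isinstance(m, nn.Conv2d):  # hook only the conv layers
        m.register_forward_hook(get_activation(name))

out_category, out_cont = model(imgs, imgs_body)
feature_map = activation['bodyModel.model.0.5']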

This worked, thanks @tom. But currently the feature maps are of size 28 x 28, and I want to make them equal to the input image size of 224 x 224. How can I do that?

That depends on your problem domain and how you arrived at these feature maps.
For some networks (you mention vgg), just upscaling might be a good approximation.
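For instance, with bilinear upsampling via F.interpolate (a sketch; bodyModel.model.0.21 is just an example of a layer that produces 28 x 28 maps for a 224 x 224 input):

import torch.nn.functional as F

fmap = activation['bodyModel.model.0.21']  # e.g. shape (N, 512, 28, 28)
upscaled = F.interpolate(fmap, size=(224, 224), mode='bilinear', align_corners=False)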

Best regards

Thomas