Model._modules[] to find a layer

I’ve trained a DenseNet121 model and am trying to get hold of one of the final layers for a GradCAM implementation. The implementation uses model._modules[] as a dict from layer name to layer, but for some reason my model has only one key-value pair in this dict: “module” mapped to the entire DenseNet object. What should I do instead? My PyTorch version is 1.2.0.

Thanks!

Which layer would you like to get?
This code would index a specific conv layer inside the model:

from torchvision import models

model = models.densenet121()
model.features.denseblock4.denselayer2.conv2

The method you mentioned (model._modules) is an internal attribute and thus shouldn’t generally be used.
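If you want to look layers up by their name (as you were doing with the _modules dict), the public named_modules() API can do that instead; a small sketch:

from torchvision import models

model = models.densenet121()
# named_modules() yields (name, module) pairs with dotted names,
# so a lookup dict can be built without touching the internal ._modules
layers = dict(model.named_modules())
print(layers["features.denseblock4.denselayer2.conv2"])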

I’m looking for a layer to create class activation maps, so probably one of the last ones like “module.densenet121.features.denseblock4.denselayer16.conv.2” or “module.densenet121.features.norm5”

So would the approach of directly accessing the modules work or are you still stuck?

My model is wrapped in torch.nn.DataParallel(model). How do I get “under the hood” to access the DenseNet? I think this is probably the cause of the issue described in my first post.

You could access the model via model.module.features...
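E.g. something like this (a minimal sketch assuming a plain torchvision densenet121 wrapped in nn.DataParallel; adapt the layer path to your actual model):

import torch.nn as nn
from torchvision import models

model = nn.DataParallel(models.densenet121())
# nn.DataParallel stores the wrapped model in its .module attribute
target_layer = model.module.features.denseblock4.denselayer16.conv2
print(target_layer)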

Oops! Silly mistake on my part. Appreciate it!

What about Sequential? I’m getting AttributeError: ‘Sequential’ object has no attribute ‘module’.

The .module attribute will be added by nn.DataParallel for all models, including nn.Sequential:

import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1, 1),
    nn.ReLU(),
    nn.Linear(1, 1))

model = nn.DataParallel(model)
print(model.module)
> Sequential(
  (0): Linear(in_features=1, out_features=1, bias=True)
  (1): ReLU()
  (2): Linear(in_features=1, out_features=1, bias=True)
)
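Individual layers inside the wrapped Sequential can then be reached through .module as well, e.g.:

first_linear = model.module[0]  # indexing into the wrapped nn.Sequential
print(first_linear)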

I am working on classification of five different types of skin images, and I have trained an AlexNet model and a GoogLeNet model. Now I want to implement Grad-CAM on the final layer of these models to get the heat-map.

After reading many blogs and code examples, I got this far. This code specifically uses a resnet50 model:

model = models.resnet50(pretrained=True)
grad_cam = GradCam(model=model, feature_module=model.layer4,
                   target_layer_names=["2"], use_cuda=args.use_cuda)

How should I pass feature_module and target_layer_names to the constructor of the GradCam class for AlexNet and for GoogLeNet?

I realized I need to get the final layer from the pretrained network by using the following code.

new_classifier = nn.Sequential(*list(model.classifier.children())[:-1])
model.classifier = new_classifier

How can I use this approach to pass these as parameters to the GradCam class?