How to get the output of the last but one convolutional layer of DenseNet121?

I’m trying to reimplement a paper about bilinear pooling (Hierarchical Bilinear Pooling), so I need the intermediate outputs of some convolutional layers. When I tried this with DenseNet121, I found it difficult, as the forward pass of DenseNet is quite different from VGG or ResNet. For example, I want to get the output of model.features.denseblock4.denselayer2.
I tried to achieve that this way:

class DenseNet121(nn.Module):
    def __init__(self, num_classes=102):
        super().__init__()
        self.model = models.densenet121(pretrained=True)
        self.layer3 = nn.Sequential(*list(self.model.children()))[-2][:-2]
        self.layer4_1 = nn.Sequential(*list(self.model.children()))[-2][-2].denselayer1
        self.layer4_2 = nn.Sequential(*list(self.model.children()))[-2][-2].denselayer2
        num_ftrs = self.model.classifier.in_features
        self.model.classifier = nn.Sequential(
            torch.nn.Linear(num_ftrs, num_classes)
        )
        # Initialize the fc layer.
        if self.model.classifier[0].bias is not None:
            torch.nn.init.constant_(self.model.classifier[0].bias, val=0)

    def forward(self, X):
        X = self.layer3(X)
        X = self.layer4_1(X)
        X = self.layer4_2(X)
        return X

It would raise an error:

   2056     return torch.batch_norm(
   2057         input, weight, bias, running_mean, running_var,
-> 2058         training, momentum, eps, torch.backends.cudnn.enabled
   2059     )

RuntimeError: running_mean should contain 32 elements not 544

I know this error may be caused by an input dimension mismatch in batch_norm (the layer’s num_features doesn’t match the number of input channels). How can I fix this? Or did I make a mistake somewhere?

I also tried to modify the forward method of the original network in the same way, but it produced the same error.

Note: What I really need are the outputs of model.features.denseblock4.denselayer15 and model.features.denseblock4.denselayer16. I’d appreciate it if anyone could tell me how to achieve this. Thanks in advance :pray: @ptrblck

Wrapping the child modules into nn.Sequential containers might not work for a lot of modules, as they might use functional calls in their original forward method, which would be lost.
You could derive a new model and override the forward method instead (I’m unsure what kind of errors you were getting, since returning the intermediates shouldn’t raise a shape mismatch), or you could alternatively use e.g. forward hooks.