Visualizing intermediate layers of a pretrained model with nn.ModuleList

I am trying to visualize some intermediate (attention) layers from a network that I created myself.
I already tried the approaches from "Accessing intermediate layers of a pretrained network forward?" and "Extract Features from models made with nn.ModuleList", but to no avail.

Somehow I am not able to iterate over my nn.ModuleList:

import torch.nn as nn
from torchvision.models import vgg16

class Attention_Maps(nn.Module):
    def __init__(self):
        super(Attention_Maps, self).__init__()
        # Load my model
        model = myModelClass
        model = model.load_from_checkpoint("<path_to_my_model>.ckpt")
        # With vgg_modules it works, despite being the same type
        vgg_modules = list(vgg16(pretrained=True).features)
        image_modules = list(model.children())
        self.modules = nn.ModuleList(image_modules)

    def forward(self, x):
        results = []
        # TypeError here in enumerate
        for i, model in enumerate(self.modules):
            x = model(x)
            if i in {1, 3, 5, 7, 9}:
                results.append(x)
        return results

This results in the following error:
TypeError: 'method' object is not iterable

The weird thing is that vgg_modules and image_modules are the same type, so I don’t expect them to behave differently.
Am I doing something wrong?
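
For reference, here is a stripped-down sketch that gives me the same error (toy layers, not my actual model):

import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super(Toy, self).__init__()
        # same pattern as above: store the submodules in an attribute called "modules"
        self.modules = nn.ModuleList([nn.Linear(4, 4), nn.ReLU()])

    def forward(self, x):
        results = []
        for i, m in enumerate(self.modules):  # TypeError: 'method' object is not iterable
            x = m(x)
            results.append(x)
        return results

Toy()(torch.randn(1, 4))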

Could you replace nn.ModuleList() with nn.Sequential() and then try?

I already tried that as well: with nn.Sequential I need to do self.modules = nn.Sequential(*image_modules), but I get the same TypeError. It also does not work when I use vgg_modules with nn.ModuleList.

Hi, I tried using nn.Sequential() and in my case it works for vgg_modules:

import torch
import torchvision.models as models
import torch.nn as nn

vgg_modules = list(models.vgg16(pretrained=True).features)
modules = nn.Sequential(*vgg_modules)
result = []
x = torch.randn(1, 3, 42, 42)

for i, mod in enumerate(modules):
    x = mod(x)
    # print("Passed layer {}".format(i))
    if i in {1, 3, 5, 7, 9}:
        result.append(x)
        if i >= 9:  # stop once the last wanted layer (9) has been collected
            break

print("done.!!", len(result))

That is interesting! For me, the code works when it is not inside a class. So, unlike the examples from the linked discussions, which put the loaded model in a separate class and used the forward method, your solution did the trick for me!

This works for me now:

model = model.load_from_checkpoint("<path-to-my-model>.ckpt")
image_modules = list(model.children())
modules = nn.Sequential(*image_modules)

x = torch.randn(1, 3, 42, 42)  # dummy input; adjust the shape to whatever the model expects
results = []
for i, mod in enumerate(modules):
    x = mod(x)
    if i in {1, 3, 5, 7, 9}:
        results.append(x)
        if i >= 9:
            break
print("Done!", len(results))

It now also works with:

image_modules = list(model.children())
modules = nn.ModuleList(image_modules)
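
For anyone who runs into the same error: I suspect the original in-class version only failed because of the attribute name. nn.Module already defines a modules() method, so self.modules resolves to that bound method instead of the stored container, which would explain the TypeError: 'method' object is not iterable. A minimal sketch under that assumption (class and attribute names are just placeholders), with the attribute renamed:

import torch.nn as nn

class AttentionMaps(nn.Module):
    def __init__(self, base_model):
        super(AttentionMaps, self).__init__()
        # use any attribute name that does not clash with an nn.Module method
        self.blocks = nn.ModuleList(list(base_model.children()))

    def forward(self, x):
        results = []
        for i, block in enumerate(self.blocks):
            x = block(x)
            if i in {1, 3, 5, 7, 9}:
                results.append(x)
        return results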

Thanks!