Access layers in pretrained Resnet50

Hi, I am using the pretrained ResNet50 wrapped by pretrainedmodels (https://pypi.org/project/pretrainedmodels/).

Now I want to access the intermediate result of each layer, so I tried the following code:

for module_pos, module in self.model._modules.items():
    if module is not None:
        x = module(x)  # Forward

However, there are two issues.

First, some of the entries in self.model._modules.items() are None (that is why I added the if module is not None: check).

Second, when I run the code, I get an error at the very last layer:

RuntimeError: size mismatch, m1: [2048 x 1], m2: [2048 x 1000] at /opt/conda/conda-bld/pytorch_1573049306803/work/aten/src/TH/generic/THTensorMath.cpp:197

How come the pretrained model runs into this problem? I know reshaping m1 to [2048] may solve it, but why does this happen?
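To see where the mismatch comes from, here is a minimal sketch using plain torch.nn stand-ins for the last two ResNet50 stages (real feature sizes, random weights, no pretrained model needed). The naive module-by-module loop hands the 4-D avgpool output straight to the Linear classifier, which is exactly the shape the error message complains about; the model's own forward flattens in between:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the last two ResNet50 stages: global average
# pooling followed by the 2048 -> 1000 classifier.
avgpool = nn.AdaptiveAvgPool2d((1, 1))
fc = nn.Linear(2048, 1000)

x = torch.randn(1, 2048, 7, 7)   # batch of 1, as in the error above
x = avgpool(x)                   # shape [1, 2048, 1, 1]

# Passing this 4-D tensor directly into fc is what the plain
# module-by-module loop does, and it triggers the size mismatch.
# ResNet's forward() flattens to [batch, 2048] first:
x = torch.flatten(x, 1)          # shape [1, 2048]
out = fc(x)
print(out.shape)                 # torch.Size([1, 1000])
```

The flatten step lives in the forward method, not in any submodule, so it is silently skipped when you iterate over _modules.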

Thank you very much!

Hi,

Chaining the layers one by one like this is only expected to work if the original module is an nn.Sequential(), since that is exactly how its forward method is implemented.
For general modules, you should check how the forward method is implemented to know how they should be used.
In the case of resnet, the repo you linked seems to use the torchvision version, whose forward function can be found here. Because the modules happen to be defined in the right order, it works up until the .reshape(). But this is only by chance and is not guaranteed.
I think the best way to access internal values here would be to change the forward method to either return or print the intermediate values you’re interested in.
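Another common way to do this without touching the forward method is a forward hook. The sketch below uses a small hypothetical nn.Sequential as a stand-in for ResNet50; register_forward_hook works the same way on the torchvision / pretrainedmodels model, you would just iterate its named children instead:

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for ResNet50.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d((1, 1)),
    nn.Flatten(),
    nn.Linear(8, 10),
)

activations = {}

def save_output(name):
    # Hook signature is (module, inputs, output).
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach a hook to every direct submodule.
for name, module in model.named_children():
    module.register_forward_hook(save_output(name))

x = torch.randn(1, 3, 16, 16)
_ = model(x)  # one normal forward pass fills `activations`

print(activations["0"].shape)  # conv output: torch.Size([1, 8, 16, 16])
print(activations["4"].shape)  # linear output: torch.Size([1, 10])
```

The hooks fire during the model's own forward pass, so the flatten/reshape step is executed exactly as the authors wrote it.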


Thanks very much! Really appreciate it.

I copied the forward code from the link you provided exactly and still got a size mismatch error at that point…

Oh, it seems that torch.flatten(x, 1) was just not expecting an input with a batch size of 1, so changing it to torch.flatten(x) fixed it. :slight_smile:
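For reference, a quick check of the two flatten calls. Note that torch.flatten(x, 1) does handle a batch size of 1 fine, it just keeps the batch dimension; the [2048 x 1] shape in the original error suggests the input tensor may have been missing a batch dimension altogether:

```python
import torch

x = torch.randn(1, 2048, 1, 1)     # avgpool output for a batch of 1

print(torch.flatten(x, 1).shape)   # torch.Size([1, 2048])  keeps batch dim
print(torch.flatten(x).shape)      # torch.Size([2048])     drops it
```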
