IntermediateLayerGetter does not return the desired layer outputs #3048

:bug: Bug

# `mobilenetv2` here is a custom MobileNetV2 implementation that accepts
# an `output_stride` argument (torchvision's mobilenet_v2 does not).
self.backbone = mobilenetv2.mobilenet_v2(
    pretrained=None, output_stride=output_stride)
# print(self.backbone.features)

# Rename layers: split `features` into named stages.
# self.backbone.p3 = self.backbone.features[0:2]
self.backbone.low_level_features = self.backbone.features[0:4]
self.backbone.p3 = self.backbone.features[4:7]
self.backbone.p4 = self.backbone.features[7:11]
self.backbone.p5 = self.backbone.features[11:14]  # 64
self.backbone.high_level_features = self.backbone.features[14:-1]
# self.backbone.p6 = self.backbone.features[-1:]

# Remove the original containers so only the renamed stages remain
# (named_children() skips entries set to None).
self.backbone.features = None
self.backbone.classifier = None

self.backbone = torchvision.models._utils.IntermediateLayerGetter(self.backbone, {
    'low_level_features': 'low',
    'p3': 0,
    'p4': 1,
    'p5': 2,
    # 'p6': 2,
    'high_level_features': 'out',
})
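
For reference, the documented usage pattern works as expected on a stock torchvision model (a minimal sketch with resnet18 rather than my custom backbone; layer2/layer3/layer4 are the standard torchvision child names):

import torch
from torchvision.models import resnet18
from torchvision.models._utils import IntermediateLayerGetter

m = resnet18()
getter = IntermediateLayerGetter(m, {'layer2': 'p3', 'layer3': 'p4', 'layer4': 'p5'})
outs = getter(torch.rand(1, 3, 224, 224))
for name, feat in outs.items():
    print(name, feat.shape)
# p3 torch.Size([1, 128, 28, 28])
# p4 torch.Size([1, 256, 14, 14])
# p5 torch.Size([1, 512, 7, 7])

Each stage comes out at half the resolution of the previous one, which is what I expected from my backbone as well.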

With my backbone, however, the widths and heights of the p3, p4, and p5 outputs are not what I expected:

low torch.Size([1, 24, 128, 184])
0 torch.Size([1, 32, 64, 92])
1 torch.Size([1, 64, 64, 92])
2 torch.Size([1, 96, 64, 92])
out torch.Size([1, 320, 64, 92])

If I forward the plain torchvision mobilenet_v2 layer by layer, each stage I want to extract from should have half the w and h of the previous stage:

import torch
from torchvision.models.mobilenet import mobilenet_v2

def main():
    m = mobilenet_v2()
    a = torch.rand([2, 3, 512, 1024])
    # push the input through each block of `features` and watch the shapes
    for l in m.features:
        a = l(a)
        print(a.shape)

if __name__ == '__main__':
    main()

torch.Size([2, 32, 256, 512])
torch.Size([2, 16, 256, 512])
torch.Size([2, 24, 128, 256])
torch.Size([2, 24, 128, 256])
torch.Size([2, 32, 64, 128])
torch.Size([2, 32, 64, 128])  <--- get this layer
torch.Size([2, 32, 64, 128])
torch.Size([2, 64, 32, 64])
torch.Size([2, 64, 32, 64])
torch.Size([2, 64, 32, 64])
torch.Size([2, 64, 32, 64]) <--- get this layer
torch.Size([2, 96, 32, 64])
torch.Size([2, 96, 32, 64])
torch.Size([2, 96, 32, 64]) <--- get this layer
torch.Size([2, 160, 16, 32])
torch.Size([2, 160, 16, 32])
torch.Size([2, 160, 16, 32])
torch.Size([2, 320, 16, 32])
torch.Size([2, 1280, 16, 32])

As you can see, the p4 stage (mapped to key 1) should come out at half the spatial size of p3, e.g. [2, 64, 32, 64] here, but I got [1, 64, 64, 92], the same spatial size as p3 (the two runs use different inputs, so only the relative stride matters).

To be clear, this layer's w and h should be half of the previous layer's, but they are the same.
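
In case it is relevant, here is a quick sketch to verify what IntermediateLayerGetter will actually replay (run on the backbone before wrapping it; named_children() should list only the renamed stages, in registration order, since entries set to None are skipped):

for name, child in self.backbone.named_children():
    print(name, type(child).__name__)
# expected: low_level_features, p3, p4, p5, high_level_features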

What mistake did I make here, or is this a bug in IntermediateLayerGetter?

Can anyone help me out?

Double post from here with a follow-up.

Thanks for the link.

The GitHub source for IntermediateLayerGetter comes with this warning, repeated below for convenience:

It has a strong assumption that the modules have been registered
into the model in the same order as they are used.
This means that one should **not** reuse the same nn.Module
twice in the forward if you want this to work.
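
My rough understanding of the failure mode, as a minimal dummy sketch (ReuseNet and its layer names are made up for illustration):

import torch
import torch.nn as nn
from torchvision.models._utils import IntermediateLayerGetter

class ReuseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.pool = nn.MaxPool2d(2)  # registered once ...

    def forward(self, x):
        x = self.conv(x)
        x = self.pool(x)
        return self.pool(x)  # ... but used twice

m = ReuseNet()
print(m(torch.rand(1, 3, 32, 32)).shape)  # torch.Size([1, 8, 8, 8]): pooled twice
getter = IntermediateLayerGetter(m, {'pool': 'out'})
print(getter(torch.rand(1, 3, 32, 32))['out'].shape)
# torch.Size([1, 8, 16, 16]): the getter replays each child once,
# so the second pool call in forward() is silently lost

If that reading is right, the getter never calls the model's forward() at all; it just replays named_children() in registration order, which is why reusing a module (or calling children out of order) silently gives wrong results.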

Could you please elaborate on this warning? Is the dummy case above the kind of situation where this approach should not be used?

Thank you.