Trying to reuse some ResNet layers but getting duplicate layers

I am trying to reuse some of the ResNet layers for a custom architecture and ran into an issue I can't figure out. Here is a simplified example; when I run:

import torch
import torch.nn as nn
from torchvision import models
from torchsummary import summary

def convrelu(in_channels, out_channels, kernel, padding):
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel, padding=padding),
        nn.ReLU(inplace=True),
    )


class ResNetUNet(nn.Module):
    def __init__(self):
        super().__init__()

        self.base_model = models.resnet18(pretrained=False)
        self.base_layers = list(self.base_model.children())

        self.layer0 = nn.Sequential(*self.base_layers[:3])

    def forward(self, x):
        print(x.shape)

        output = self.layer0(x)

        return output


base_model = ResNetUNet().cuda()
summary(base_model, (3, 224, 224))

it gives me:

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 112, 112]           9,408
            Conv2d-2         [-1, 64, 112, 112]           9,408
       BatchNorm2d-3         [-1, 64, 112, 112]             128
       BatchNorm2d-4         [-1, 64, 112, 112]             128
              ReLU-5         [-1, 64, 112, 112]               0
              ReLU-6         [-1, 64, 112, 112]               0
================================================================
Total params: 19,072
Trainable params: 19,072
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 36.75
Params size (MB): 0.07
Estimated Total Size (MB): 37.40
----------------------------------------------------------------

Each layer is duplicated (there are two Conv2d, two BatchNorm2d, and two ReLU entries) instead of appearing once. If I print out self.base_layers[:3] I get:

[Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False), BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True), ReLU(inplace=True)]

which shows just three layers without duplicates. Why is it duplicating my layers?

I am using PyTorch version 1.4.0.

I assume summary loops over all registered submodules of the model and prints them out.
Since you've registered the full resnet18 as self.base_model and then registered its first three layers a second time as self.layer0 inside an nn.Sequential container, those three modules are visited twice and therefore printed as duplicates.
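One way to avoid the duplication, assuming you only need those first three layers, is to keep the full ResNet in a local variable inside __init__ so it is never registered as a submodule; only self.layer0 then shows up in the summary. A minimal sketch:

import torch.nn as nn
from torchvision import models


class ResNetUNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Local variable only: the full resnet18 is never assigned to
        # self, so it is not registered as a submodule of this model.
        base_model = models.resnet18(pretrained=False)
        base_layers = list(base_model.children())
        # Only this Sequential is registered, so summary will print
        # conv1, bn1, and relu exactly once.
        self.layer0 = nn.Sequential(*base_layers[:3])

    def forward(self, x):
        return self.layer0(x)

You can check what ends up registered with model.named_children(): your original version yields both base_model and layer0, while this version yields only layer0. Alternatively, keep self.base_model and call its layers directly in forward instead of wrapping them in a second container, so that each module is reachable through only one registered attribute.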