Question on extracting intermediate features from pretrained models

I am trying to extract intermediate features from a pretrained ResNet. As per this post https://discuss.pytorch.org/t/how-to-extract-features-of-an-image-from-a-trained-model/119/3 we can do something like this:

    new_classifier = nn.Sequential(*list(model.classifier.children())[:-1])
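For example, with a small hypothetical `nn.Sequential` head (standing in for something like a `.classifier` attribute), the slicing drops the final layer:

```python
import torch
import torch.nn as nn

# Hypothetical classifier head, just for illustration (not a real torchvision model)
classifier = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),  # final layer we want to drop
)

# Keep every child module except the last one
new_classifier = nn.Sequential(*list(classifier.children())[:-1])

feats = new_classifier(torch.randn(4, 512))
print(feats.shape)  # torch.Size([4, 256]) -- the 256-dim features before the last Linear
```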

Looking at the code for the pretrained resnet50, the model is broken up into several blocks. The forward function then takes these blocks and applies batchnorm, relu, maxpool, reshape, etc. to them.

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = x.reshape(x.size(0), -1)
        x = self.fc(x)

        return x

So if the pretrained model is not just one large nn.Sequential module, does the above way of extracting features still work? Will the new classifier apply the relu, maxpool, reshape, etc. in this manner?

It generally depends on the original implementation, i.e. if the base model was implemented using nn.Sequential (or registers every operation as a submodule in the same order the forward method calls them), this approach might work.
However, any functional calls in the forward method (such as the `x.reshape(x.size(0), -1)` here, which is not a registered submodule) will be lost when you rebuild the model from its children.

In the example of resnet50, this approach will work for the convolutional features, since even the non-linearity is defined as a module. The one exception is the reshape between avgpool and fc: it is a functional call, so the extracted features will still be 4-dimensional and you would have to flatten them manually.
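Here is a minimal sketch of both points, using a hypothetical toy module (not the real resnet50) whose forward mirrors the resnet pattern of module calls plus one functional reshape:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Toy stand-in for a resnet-style model: every op except the
    reshape is a registered submodule, mirroring torchvision's resnet50."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu(x)
        x = self.avgpool(x)
        x = x.reshape(x.size(0), -1)  # functional call: NOT a child module
        x = self.fc(x)
        return x

model = TinyNet()
# Keep every child module except fc; the reshape is not a child, so it is dropped
features = nn.Sequential(*list(model.children())[:-1])

out = features(torch.randn(2, 3, 16, 16))
print(out.shape)  # torch.Size([2, 8, 1, 1]) -- still 4-D, the reshape was lost

out = out.reshape(out.size(0), -1)  # reapply the functional flatten manually
print(out.shape)  # torch.Size([2, 8])
```

The order matters here: `children()` yields submodules in registration (i.e. `__init__`) order, so this only reproduces the original computation when that order matches the forward method, as it does in torchvision's resnet.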