Using torchvision resnet models and removing final layers

I’m using the pretrained torchvision resnet50 model, but I’m not sure whether this implementation will work properly. I want to remove the FC and adaptive average pooling layers. Is writing it out like this sensible? I’m concerned that this way chains the top-level layers together but drops the identity shortcut connections, unless children() somehow retains the relevant calls from forward?

import torch.nn as nn
from torchvision import models

class ResNet(nn.Module):
    def __init__(self, hparams):
        super(ResNet, self).__init__()
        # Keep everything except the final avgpool and fc layers
        self.stripped_body = nn.Sequential(*list(models.resnet50(pretrained=True).children())[:-2])

    def forward(self, img):
        x = self.stripped_body(img)
        return x

I’m not entirely sure, so perhaps someone can confirm, but I think your way should work and not remove the skip connections: the shortcuts live inside the Bottleneck blocks’ own forward methods, which nn.Sequential leaves untouched; children() only flattens the top-level modules. I feel you could also use something like this:

model = models.resnet50(pretrained=True)
model.avgpool = nn.Identity()
# Note: resnet's forward still flattens before fc, so with avgpool removed
# input_size would be 2048 * 7 * 7 for 224x224 inputs
model.fc = nn.Sequential(nn.Linear(input_size, num_classes))

Edit: Or I guess in your case, setting

model.fc = nn.Identity()

as well.


Thanks @AladdinPerzon, I’m giving that a go to see if there’s a difference in results.

That’s for fine-tuning, right? If you wish to train the entire network and only remove the last layers, you wouldn’t want to set param.requires_grad = False.

About fine-tuning: is setting requires_grad=False on the parameters the expected method to freeze them?

I’ve been using that with a VGG feature extractor for a perceptual loss, but I was wondering whether only passing the required parameters to the optimiser is the better way? Or both?! :thinking: