Extract feature maps from intermediate layers without modifying forward()

Hi,
I am interested in obtaining features from the intermediate layers of my model without modifying its forward() method, since the model is already trained. I also don't want to split the model, because I want the final prediction and the features from several upper layers in the same forward pass.
I have read about register_forward_hook, but I haven't found any examples of how to use it.

I have this:

def get_features_hook(module, input, output):
    # output holds this layer's activations for the current forward pass
    print(output.data.cpu().numpy().shape)
    features = output.data.cpu().numpy()

model.features.module[37].register_forward_hook(get_features_hook)
model.forward(im_tensor)

Is there any way to get that features value out of the hook?

Thanks

There are many examples in this thread, and a search will give you a few more:

https://discuss.pytorch.org/search?q=register_hook
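
The basic pattern in those examples is to have the hook write into something that outlives the call, e.g. a dict defined outside it. A minimal sketch (the layer index and the names here are just examples, not specific to your model):

import torch
import torchvision.models as models

activations = {}  # the hook writes here, so the value survives the call

def save_activation(name):
    def hook(module, input, output):
        activations[name] = output.detach().cpu()
    return hook

model = models.vgg16(pretrained=True)
# index 21 happens to be conv4_3 in torchvision's vgg16.features
model.features[21].register_forward_hook(save_activation('conv4_3'))

im_tensor = torch.randn(1, 3, 224, 224)
prediction = model(im_tensor)      # the usual forward pass / prediction
conv4_3 = activations['conv4_3']   # intermediate features from the same pass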

Hi,

I am trying to extract the feature outputs of intermediate layers of the pre-trained VGG16 architecture and concatenate them. The built-in VGG models in torchvision don't expose names for all of their layers, so I am unable to use register_forward_hook. Is there any other alternative?

I am trying to use something like the code below, but I am not sure whether gradients will accumulate correctly at the intermediate layers during backpropagation, since the forward pass builds two computation paths through the same pretrained layers.

I have something like this:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class ModifiedVGG(nn.Module):
    def __init__(self):
        super(ModifiedVGG, self).__init__()
        k = [29, 22]
        model = models.vgg16(pretrained=True)
        self.layer1 = nn.Sequential(*list(model.features.children())[:k[0]])
        self.layer2 = nn.Sequential(*list(model.features.children())[:k[1]])
        self.conv1 = nn.Conv2d(512, 128, 3, padding=1)
        self.upsample1 = nn.UpsamplingBilinear2d(scale_factor=2)

    def forward(self, x):
        conv_4_3 = self.layer2(x)
        conv_4_3 = F.relu(self.conv1(conv_4_3))
        conv_5_3 = self.layer1(x)
        conv_5_3 = F.relu(self.conv1(conv_5_3))
        conv_5_3 = self.upsample1(conv_5_3)
        concat_features = torch.cat([conv_4_3, conv_5_3], 1)
        return concat_features
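
An alternative I considered, which avoids running the first 22 layers twice, is to split the backbone so the deeper chunk continues from the shallower one (a sketch with the same layer indices assumed; I also gave each branch its own 3x3 conv here, since I am not sure I want both branches sharing self.conv1):

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class ModifiedVGGv2(nn.Module):
    def __init__(self):
        super(ModifiedVGGv2, self).__init__()
        features = list(models.vgg16(pretrained=True).features.children())
        self.layer2 = nn.Sequential(*features[:22])    # input   -> conv4_3
        self.layer1 = nn.Sequential(*features[22:29])  # conv4_3 -> conv5_3
        self.conv1 = nn.Conv2d(512, 128, 3, padding=1)
        self.conv2 = nn.Conv2d(512, 128, 3, padding=1)
        self.upsample1 = nn.UpsamplingBilinear2d(scale_factor=2)

    def forward(self, x):
        conv_4_3_raw = self.layer2(x)
        conv_5_3 = self.layer1(conv_4_3_raw)  # continues where layer2 stopped
        conv_4_3 = F.relu(self.conv1(conv_4_3_raw))
        conv_5_3 = self.upsample1(F.relu(self.conv2(conv_5_3)))
        return torch.cat([conv_4_3, conv_5_3], 1)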

Please let me know if this would work or not.

Thank you!

The simplest solution is to:

  • copy over the vgg file
  • change the name of the class
  • modify the class (for example remove or replace layers as you see fit)
  • use the function load_state_dict to load the original VGG weights dict into this modified class.

This is less error prone, and it’s easily modifiable.
The URLs for the VGG weight state dicts are listed here: https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py#L12-L21
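
Roughly, and untested: vgg_copy below stands for your edited copy of vgg.py (make_layers and cfg are defined in that file), and the weight URL is the vgg16 entry from the lines linked above:

import torch.nn as nn
import torch.utils.model_zoo as model_zoo
from vgg_copy import make_layers, cfg  # your renamed copy of torchvision's vgg.py

class VGG16Trunk(nn.Module):
    """vgg16 with the classifier removed -- keep only what you need."""
    def __init__(self):
        super(VGG16Trunk, self).__init__()
        self.features = make_layers(cfg['D'])  # 'D' is the vgg16 config

    def forward(self, x):
        return self.features(x)

model = VGG16Trunk()
pretrained = model_zoo.load_url(
    'https://download.pytorch.org/models/vgg16-397923af.pth')
# keep only the weights that still have a matching parameter in the new class
own = model.state_dict()
own.update({k: v for k, v in pretrained.items() if k in own})
model.load_state_dict(own)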

Thank you. This would be the least error prone of all. Also, when I use register_forward_hook, do I need to worry about register_backward_hook, or is that taken care of automatically? Some more examples of register_hook would be appreciated. Thank you.

Thanks a lot, this code works for me!

Hey @smth, I have looked around a lot but couldn't find an example of this method. Could you please point me to one? I am just starting with PyTorch and couldn't find many tutorials on using this function for transfer learning.