Extracting Features

Is there a way to extract the same features that timm's forward_features functionality provides for models created directly with the torchvision models subpackage, or do we have to use hooks? To clarify, here is a simple scenario: with a model created using timm, we can extract the features by calling forward_features.

import torch
import timm

m = timm.create_model('resnet50', pretrained=True)
o = m(torch.randn(2, 3, 299, 299))  # full forward pass: classification logits
print(f'Original shape: {o.shape}')
o = m.forward_features(torch.randn(2, 3, 299, 299))  # features before pooling/classifier
print(f'Unpooled shape: {o.shape}')
print(o)

Output

Original shape: torch.Size([2, 1000])
Unpooled shape: torch.Size([2, 2048, 10, 10])

tensor([[[[0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.0000],
          [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.0000],
          [0.0000, 0.0000, 0.0000,  ..., 0.0000, 1.9479, 1.7675],
          ...,
          [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.0000]]]],
       grad_fn=<ReluBackward0>)
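As an aside, recent torchvision versions offer a built-in alternative to manual hooks for exactly this use case: torchvision.models.feature_extraction.create_feature_extractor. A minimal sketch, assuming torchvision >= 0.11:

import torch
from torchvision import models
from torchvision.models.feature_extraction import create_feature_extractor

model = models.resnet50(pretrained=True)
# request the output of layer4, i.e. the unpooled features
extractor = create_feature_extractor(model, return_nodes=['layer4'])
feats = extractor(torch.randn(2, 3, 299, 299))['layer4']
print(feats.shape)  # torch.Size([2, 2048, 10, 10])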

I want to get the same features from a pretrained model created directly from the models subpackage. I used a forward hook, but the outputs are not the same.

import torch
from torchvision import models

model = models.resnet50(pretrained=True)

activation = {}  # stores hook outputs by name

def getActivation(name):
  # the hook signature
  def hook(model, input, output):
    activation[name] = output.detach()
  return hook

# register a forward hook on the last residual stage
h1 = model.layer4.register_forward_hook(getActivation('layer4'))
# forward pass -- getting the outputs
out = model(torch.randn(2, 3, 299, 299, requires_grad=True))
print(f'Original shape: {out.shape}')

print(f'Unpooled shape: {activation["layer4"].shape}')
print(activation['layer4'])

Output:

Original shape: torch.Size([2, 1000])
Unpooled shape: torch.Size([2, 2048, 10, 10])

tensor([[[[0.7899, 1.9169, 0.5745,  ..., 0.0000, 0.0000, 0.0000],
          [0.0000, 1.4325, 0.4367,  ..., 0.0000, 0.0000, 0.0000],
          [1.5453, 1.6186, 0.0000,  ..., 0.0000, 0.0000, 0.0000],
          ...,
          [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0765, 0.2392]]]])

Thank you in advance for your help.

It seems you are using random input tensors in all use cases without seeding the code.
Based on this I would expect to see random results, so did you try to use static tensors instead?
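For reference, a minimal sketch of such a deterministic comparison; the seed, input shape, and eval() calls below are illustrative choices rather than anything from the posts above:

import torch
import timm
from torchvision import models

torch.manual_seed(0)             # any fixed seed works
x = torch.randn(2, 3, 224, 224)  # one static input reused for both models

m1 = timm.create_model('resnet50', pretrained=True).eval()
m2 = models.resnet50(pretrained=True).eval()

activation = {}
m2.layer4.register_forward_hook(lambda m, i, o: activation.update(layer4=o.detach()))

with torch.no_grad():
    f1 = m1.forward_features(x)  # timm's unpooled features
    _ = m2(x)                    # fills activation['layer4'] via the hook
print(f1.shape, activation['layer4'].shape)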

Hi @ptrblck, yes, I tried it with a static tensor. Here is the complete code and the output from each model. I also wrote a simple class to extract features, but the output from the models subpackage is still different from the model created using timm.

import torch
import torchvision
from torchvision import transforms
from torch.utils.data import DataLoader
from torchvision import datasets, models
import timm
import os

data_dir = 'ImageNet/'
batch_size = 1

transform_test = {
        'test': transforms.Compose([
            transforms.Resize([224, 224]),
            transforms.ToTensor(),
        ]),
}

testset = datasets.ImageFolder(os.path.join(data_dir,'test'), transform_test['test'])
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=1)

model1 = timm.create_model('resnet50', pretrained=True)
model2 = models.resnet50(pretrained=True)

activation = {}
def getActivation(name):
  # the hook signature
  def hook(model, input, output):
    activation[name] = output.detach()
  return hook

# Another method to extract features: wrap everything up to (but not
# including) the global pool and classifier in an nn.Sequential
class ResnetFeatureExtractor(torch.nn.Module):
    def __init__(self, model):
        super(ResnetFeatureExtractor, self).__init__()
        self.model = model
        self.feature_extractor = torch.nn.Sequential(*list(self.model.children())[:-2])

    def forward(self, x):  # define forward rather than __call__ for nn.Module
        return self.feature_extractor(x)

hook1 = model2.layer4.register_forward_hook(getActivation('layer4'))

# Extracting features from both models without using a forward hook
model3 = ResnetFeatureExtractor(model1)
model4 = ResnetFeatureExtractor(model2)

if __name__ == '__main__':
    for x, y in testloader:
        out1 = model1(x)
        print(f'Original shape: {out1.shape}')
        out1 = model1.forward_features(x)
        print(f'Unpooled shape: {out1.shape}')
        print(out1)

        out2 = model2(x)
        print(f'Original shape: {out2.shape}')
        print(f'Unpooled shape: {activation["layer4"].shape}')
        print(activation['layer4'])

        out3 = model3(x)
        print(f'Unpooled shape:{out3.shape}')
        print(out3)

        out4 = model4(x)
        print(f'Unpooled shape:{out4.shape}')
        print(out4)
        break
    hook1.remove()
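One detail worth flagging in the code above: neither model is put into evaluation mode, so the BatchNorm layers normalize with per-batch statistics instead of the pretrained running statistics. A minimal sketch of the usual fix (an addition on my part, not in the original code), placed before the loop:

model1.eval()  # use running BatchNorm statistics, disable dropout
model2.eval()

This alone would not explain the timm vs torchvision difference, but it makes the extracted features match what the pretrained networks produce at inference time.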

And here is the output:
Output1:

tensor([[[[0.0000, 0.0668, 0.0000,  ..., 0.0000, 0.0000, 0.0000],
          [0.0000, 0.0000, 0.3301,  ..., 0.5048, 1.2857, 0.0000],
          [0.0000, 0.0000, 0.0000,  ..., 3.4801, 1.9558, 0.0000],
          ...,
          [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.0000]]]],
       grad_fn=<ReluBackward0>)

Output2:

tensor([[[[0.0000, 0.0000, 0.5521,  ..., 0.5259, 0.0000, 0.0000],
          [0.0000, 0.5295, 0.0000,  ..., 1.2876, 0.4188, 0.2978],
          [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.1128, 0.3094],
          ...,
          [0.5405, 0.7594, 0.0300,  ..., 2.0316, 2.1789, 1.2677]]]])

Output3:

tensor([[[[0.0000, 0.0668, 0.0000,  ..., 0.0000, 0.0000, 0.0000],
          [0.0000, 0.0000, 0.3301,  ..., 0.5048, 1.2857, 0.0000],
          [0.0000, 0.0000, 0.0000,  ..., 3.4801, 1.9558, 0.0000],
          ...,
          [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.0000]]]],
       grad_fn=<ReluBackward0>)

Output4:

tensor([[[[0.0000, 0.0000, 0.5521,  ..., 0.5259, 0.0000, 0.0000],
          [0.0000, 0.5295, 0.0000,  ..., 1.2876, 0.4188, 0.2978],
          [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.1128, 0.3094],
          ...,
          [0.5405, 0.7594, 0.0300,  ..., 2.0316, 2.1789, 1.2677]]]],
       grad_fn=<ReluBackward0>)

Outputs 1 and 3 match each other, and outputs 2 and 4 match each other, but the timm features still differ from the torchvision ones.
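A plausible explanation, not confirmed in the thread itself: the extraction methods agree with each other per model (forward_features and the Sequential wrapper on the timm model, the hook and the wrapper on the torchvision model), so the remaining difference likely comes from the checkpoints rather than the extraction code. timm and torchvision distribute different pretrained weights for resnet50. A minimal sketch to check this, assuming the plain resnet50 variant where the stem conv is exposed as conv1 in both libraries:

import torch
import timm
from torchvision import models

m1 = timm.create_model('resnet50', pretrained=True)
m2 = models.resnet50(pretrained=True)

# If the checkpoints shared weights, the stem conv would match exactly;
# any difference here propagates to every downstream feature map.
print(torch.allclose(m1.conv1.weight, m2.conv1.weight))  # likely False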