Asking for help: is there a way to extract the same features that the `forward_features` functionality of timm provides, for models created directly from the torchvision `models` subpackage, or do we have to use hooks? To clarify, here is a simple scenario: with a model created using timm, we can extract the features by calling `forward_features`.
import torch
import timm
m = timm.create_model('resnet50', pretrained=True)
o = m(torch.randn(2, 3, 299, 299))
print(f'Original shape: {o.shape}')
o = m.forward_features(torch.randn(2, 3, 299, 299))
print(f'Unpooled shape: {o.shape}')
print(o)
Output:
Original shape: torch.Size([2, 1000])
Unpooled shape: torch.Size([2, 2048, 10, 10])
tensor([[[[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 1.9479, 1.7675],
...,
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]]]],
grad_fn=<ReluBackward0>)
I want to get the same features from a pretrained model created directly from the torchvision `models` subpackage. I used a forward hook, but the outputs are not the same.
from torchvision import models
model = models.resnet50(pretrained=True)
activation = {}  # storage for the hooked outputs
def getActivation(name):
    # the hook signature
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook
# register forward hooks
h1 = model.layer4.register_forward_hook(getActivation('layer4'))
# forward pass -- getting the outputs
out = model(torch.randn(2, 3, 299, 299, requires_grad=True))
print(f'Original shape: {out.shape}')
print(f'Unpooled shape: {activation["layer4"].shape}')
Output:
Original shape: torch.Size([2, 1000])
Unpooled shape: torch.Size([2, 2048, 10, 10])
tensor([[[[0.7899, 1.9169, 0.5745, ..., 0.0000, 0.0000, 0.0000],
[0.0000, 1.4325, 0.4367, ..., 0.0000, 0.0000, 0.0000],
[1.5453, 1.6186, 0.0000, ..., 0.0000, 0.0000, 0.0000],
...,
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0765, 0.2392]]]])
Thank you in advance for your help.