How can I use a pre-trained ResNet to extract features from my own dataset?

Hello,
I want to extract features of my own dataset from the last hidden layer of ResNet (before softmax).
I defined the following:
import torchvision.models as models

resnet152 = models.resnet152(pretrained=True,requires_grad=False)
modules=list(resnet152.children()[:-1])
resnet152=nn.Sequential(*modules)  

Is it correct?
What’s next?

Thank you


Have you run the code? There are several mistakes. First, resnet152.children() returns a generator, which is not subscriptable. Also, models.resnet152() does not take a requires_grad argument. The following code will work:

import torch
import torch.nn as nn
import torchvision.models as models
from torch.autograd import Variable

resnet152 = models.resnet152(pretrained=True)
modules = list(resnet152.children())[:-1]   # drop the last layer (the fully connected classifier)
resnet152 = nn.Sequential(*modules)
for p in resnet152.parameters():
    p.requires_grad = False                 # freeze all weights so no gradients are computed

Of course, the outcome depends on what you want to achieve. This code returns a model consisting of all the layers of resnet152 except the last one (a fully connected layer), with frozen parameters.
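
For example, a quick sanity check of what this truncated model produces (a small sketch, assuming a single 224x224 RGB input):

dummy = torch.randn(1, 3, 224, 224)               # fake batch containing one image
features = resnet152(Variable(dummy))
print(features.size())                            # torch.Size([1, 2048, 1, 1]): pooled output of the last hidden layer
features = features.view(features.size(0), -1)    # flatten to a (1, 2048) feature matrix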


Thank you for the clarification.
Actually, what I would like is to use this pre-trained ResNet to extract features (from the last hidden layer) from a new dataset (my own dataset), for example UCF-Sports or HMDB…

I hope my question is clearer now.

Thank you

Yes, then this is exactly the way to go. You can see more in the tutorial:
http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#sphx-glr-beginner-transfer-learning-tutorial-py

@jytug, I don’t need transfer learning or retraining of the last hidden layer. I just want to use ResNet to get a representation of a given input from its last hidden layer.
My purpose is as follows:
Given a new input and the pre-trained ResNet:
Get the features of that input from the last hidden layer (before softmax) of ResNet.

Hope that makes my question clearer.

What’s wrong with @jytug’s code? It seems correct and completely aligned with what you are trying to achieve.


Maybe I got your point. Once you have @jytug’s resnet152 model, you just need to pass it an image:

img = torch.Tensor(3, 224, 224).normal_() # random image
img_var = Variable(img.unsqueeze(0)) # add a batch dimension and wrap it in a Variable
features_var = resnet152(img_var) # get the output from the last hidden layer of the pretrained resnet
features = features_var.data # get the tensor out of the variable
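
One extra detail: it is usually a good idea to switch the model to eval mode before extracting features, otherwise the BatchNorm layers normalize with the statistics of the batch you feed in instead of their running ImageNet statistics. A minimal sketch, assuming the same resnet152 and img_var as above:

resnet152.eval()                               # inference behaviour for BatchNorm (and Dropout) layers
features = resnet152(img_var).data.view(-1)    # flattened 2048-dimensional feature vector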


Thank you a lot @Federico_Pala, this is what I’m looking for.
I have a question for you:
Does the image have to be of that shape and dimension (3, 224, 224)?
Is that the required input dimension of the image for ResNet?

Thank you

Yes, the network has been trained with color images of size 224x224. If your images are, for instance, grayscale, you have to copy the single channel three times. If your images are of a different size, you can resize them or take a 224x224 crop out of them. Keep in mind that you can get your features faster by creating a big tensor with, let’s say, 100 images (shape 100x3x224x224) and processing them in a single forward pass.
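
With a reasonably recent torchvision, that batched pipeline could look roughly like this (a sketch only; image_paths is a made-up list of paths to your own images, and resnet152 is the truncated model from above):

from PIL import Image
import torch
from torch.autograd import Variable
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),                             # shorter side to 256 pixels
    transforms.CenterCrop(224),                         # 224x224 center crop, as the pretrained weights expect
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],    # standard ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# convert('RGB') also takes care of grayscale images by replicating the single channel
batch = torch.stack([preprocess(Image.open(p).convert('RGB')) for p in image_paths])  # (N, 3, 224, 224)
features = resnet152(Variable(batch)).data.view(batch.size(0), -1)                    # (N, 2048) feature matrix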


Thank you, it works. It’s a little bit slow, so I put 50 images in each tensor.

Hi jytug,
I tried the above but then I faced issues while loading the weights.
Could you help me overcome them?
Here is the link:

Look at the latest topic…

After looking at a few other blogs, this seemed to work for me:

new_classifier = nn.Sequential(*list(loaded_model.children())[:-1])  # drop the final fully connected layer
model = new_classifier
outputs = model(input_image.unsqueeze_(0).cuda())  # add a batch dimension and move the image to the GPU
outputs = outputs.view(-1)  # flatten the pooled feature map into a 1-D feature vector
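
For context, loaded_model and input_image are not defined in that snippet; a minimal setup it seems to assume would look something like this (the pretrained resnet152 is just a stand-in for whatever model was loaded):

import torch
import torchvision.models as models

loaded_model = models.resnet152(pretrained=True).cuda().eval()  # pretrained backbone on the GPU, in eval mode
input_image = torch.randn(3, 224, 224)                          # stand-in for a preprocessed 224x224 RGB image tensor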

But in the feature extractor tutorial, why do you need to reinitialize the last layer when, for feature extraction, we don’t need it in the first place?

I tried using the following model for feature extraction instead, which keeps the original layer connections of the original ResNet, and got slightly better training accuracy, but the results were almost the same:

import torch.nn as nn
import torchvision.models as models

resnet = models.resnet152(pretrained=True)
num_ftrs_resnet = resnet.fc.in_features      # 2048 for resnet152
for param in resnet.parameters():
    param.requires_grad = False              # freeze the whole backbone
resnet.fc = nn.Flatten()                     # replace the classifier so forward() returns the pooled features
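
As a quick check (a sketch with a dummy batch), the modified model then returns the pooled features directly:

import torch

resnet.eval()                            # BatchNorm layers use their running statistics
dummy = torch.randn(4, 3, 224, 224)      # fake batch of 4 RGB images
features = resnet(dummy)                 # shape (4, 2048), i.e. num_ftrs_resnet features per image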

Hi, I have a question regarding image size. It seems that the ResNet family can only work on 224x224 images. Are there other pretrained models that can work on larger image dimensions, such as 512x512 or even 1024x1024?