How can I use a pre-trained ResNet to extract features from my own dataset?


#1

Hello,
I want to extract features from my own dataset using the last hidden layer of ResNet (before the softmax).
I defined the following:
import torchvision.models as models

resnet152 = models.resnet152(pretrained=True,requires_grad=False)
modules=list(resnet152.children()[:-1])
resnet152=nn.Sequential(*modules)  

Is it correct?
What’s next?

Thank you


(Filip Binkiewicz) #2

Have you run the code? There are a few mistakes. First, resnet152.children() returns a generator and hence is not subscriptable, so you have to convert it to a list before slicing. Also, models.resnet152() does not take a requires_grad argument. The following code will work:

import torch
import torch.nn as nn
import torchvision.models as models
from torch.autograd import Variable

resnet152 = models.resnet152(pretrained=True)
modules = list(resnet152.children())[:-1]   # drop the last (fully connected) layer
resnet152 = nn.Sequential(*modules)
for p in resnet152.parameters():
    p.requires_grad = False                 # freeze the pretrained weights

Of course, the outcome depends on what you want to achieve. This code returns a model consisting of all layers of resnet152 except the last one (the fully connected layer), with its parameters frozen.
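
For instance, a quick sanity check with a random dummy input (just a sketch, not a real image) shows the shape of the extracted features:

# pass a random 224x224 "image" through the truncated network
dummy = Variable(torch.randn(1, 3, 224, 224))
features = resnet152(dummy)
print(features.size())  # torch.Size([1, 2048, 1, 1]) -- output of the final average pooling layer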


#3

Thank you for the clarification.
Actually, my question is that I would like to use this pre-trained ResNet to extract features (from the last hidden layer) on a new dataset (my own dataset), say UCF-sport, HMDB, etc.

I hope that makes my question clearer.

Thank you


(Filip Binkiewicz) #4

Yes, then this is exactly the way to go. You can see more in the tutorial:
http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#sphx-glr-beginner-transfer-learning-tutorial-py


#6

@jytug, I don’t need transfer learning or retraining of the last hidden layer. I just want to use ResNet to get a representation of a given input from the last hidden layer.
My purpose is as follows:
Given a new input and the pre-trained ResNet, get the features of that input from the last hidden layer (before the softmax).

Hope that makes it clearer.


(Simon Wang) #7

What’s wrong with @jytug’s code? It seems correct and completely aligned with what you are trying to achieve.


(Federico Pala) #8

Maybe I got your point. Once you have @jytug’s resnet152 model, you need to get an image:

img = torch.Tensor(3, 224, 224).normal_()   # a random image
img_var = Variable(img.unsqueeze(0))        # add a batch dimension and wrap it in a Variable
features_var = resnet152(img_var)           # get the output from the last hidden layer of the pretrained resnet
features = features_var.data                # get the tensor out of the Variable
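
If you prefer a flat feature vector per image (a small extra step, not part of the snippet above), you can reshape the output, which for resnet152 has 2048 channels:

features = features.view(features.size(0), -1)  # flatten (N, 2048, 1, 1) into (N, 2048)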


#9

Thanks a lot @Federico_Pala, this is what I’m looking for.
I have a question for you:
Does the image have to be of that shape and dimension (3, 224, 224)?
Is that the required input dimension for ResNet?

Thank you


(Federico Pala) #10

Yes, the network has been trained on color images of size 224x224. If your images are, for instance, grayscale, you have to copy the single channel three times. If your images are of a different size, you can resize them or take a 224x224 crop out of them. Keep in mind that you can extract your features faster by creating a big tensor with, say, 100 images (shape 100x3x224x224) and processing them in a single forward pass.
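
A rough sketch of such a pipeline (the file names are placeholders, and the normalization values are the standard ImageNet statistics commonly used with the pretrained torchvision models):

import torch
from torch.autograd import Variable
from torchvision import transforms
from PIL import Image

# standard ImageNet preprocessing: resize, center-crop to 224x224, normalize
preprocess = transforms.Compose([
    transforms.Resize(256),          # called transforms.Scale in older torchvision versions
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

paths = ["img_0001.jpg", "img_0002.jpg"]  # placeholder paths to your own images
# convert("RGB") also takes care of grayscale images by replicating the single channel
batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
features = resnet152(Variable(batch)).data  # shape: (len(paths), 2048, 1, 1)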


#11

Thank you. It works. It’s a little bit slow, so I used 50 images per tensor.


(Jaideep Valani) #13

Hi @jytug,
I tried the above, but then I ran into issues while loading the weights.
Could you help me overcome them?
Here is the link:
