resnet.features does not work

Hi,

I want to extract features from the pretrained CNN.
Until recently, the following code worked, but now I get an error.

import torchvision.models as models

resnet = models.resnet152(pretrained=True)

resnet.features    # AttributeError: 'ResNet' object has no attribute 'features'

How can I fix this problem?

Thanks.


Hi,

ResNet has no module named features. (I guess you followed examples that use a pretrained VGG network.)

Are you trying to use only a few layers from ResNet? If you explain what you are trying to do, I will try to help.
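For illustration, a quick check of the difference (not from the original reply; it only shows why the attribute lookup fails, so the weights are not needed):

import torchvision.models as models

vgg = models.vgg16(pretrained=False)
print(hasattr(vgg, 'features'))     # True  - VGG wraps its conv layers in a .features Sequential
resnet = models.resnet152(pretrained=False)
print(hasattr(resnet, 'features'))  # False - ResNet registers conv1, layer1..layer4, fc, etc. directly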

@SelvamArul

I want to extract feature vectors from the last hidden layer (before the softmax layer).

I don’t want to replace the fc layer, as in resnet.fc = nn.Linear(resnet.fc.in_features, embed_size), and I don’t want to train it; I just want to extract the feature vectors.

Thanks
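For reference, one way to get those vectors without replacing or training fc is a forward hook on the pooling layer right before it. A rough sketch (the hook approach and the dummy input are only for illustration, not from this thread):

import torch
import torchvision.models as models

resnet = models.resnet152(pretrained=True)
resnet.eval()

features = {}

def hook(module, inputs, output):
    features['avgpool'] = output.flatten(1).detach()  # (batch, 2048) feature vectors

handle = resnet.avgpool.register_forward_hook(hook)   # the layer just before fc

with torch.no_grad():
    _ = resnet(torch.randn(1, 3, 224, 224))           # dummy input, illustration only

print(features['avgpool'].shape)                      # torch.Size([1, 2048])
handle.remove()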

Currently I am solving the problem like this.

import torch.nn as nn
import torchvision.models as models

resnet = models.resnet152(pretrained=True)
modules = list(resnet.children())[:-1]      # delete the last fc layer
resnet = nn.Sequential(*modules)
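A quick sanity check of the truncated model (shapes assume the usual 224x224 ImageNet-sized input; the random batch is only for illustration):

import torch

images = torch.randn(4, 3, 224, 224)        # dummy batch
with torch.no_grad():
    feats = resnet(images)                  # (4, 2048, 1, 1) from the final pooling layer
feats = feats.view(feats.size(0), -1)       # (4, 2048) feature vectors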

Just as a side note, if you only want to extract the features, do not forget to set requires_grad = False on the parameters.


Hi,
Can you explain it in more detail? Where should I set parameters.requires_grad = False, i.e.
what is the full expression?
Thank you.

When you use pretrained models for finetuning, you don’t want to backpropagate through the pretrained model, i.e. you only update the weights of the new layers added on top of the pretrained model. To achieve this, iterate through the parameters of the pretrained model and set requires_grad = False. For example:

import torch.nn as nn
import torchvision.models as models

resnet = models.resnet152(pretrained=True)
modules = list(resnet.children())[:-1]      # delete the last fc layer
resnet = nn.Sequential(*modules)

# Now set requires_grad to False
for param in resnet.parameters():
    param.requires_grad = False
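As a complementary note (not from the reply above), for pure feature extraction you can also call eval() and wrap the forward pass in torch.no_grad(), so no graph is built at all; a sketch assuming images is an already-normalized input batch:

import torch

resnet.eval()                   # use the batch-norm running statistics
with torch.no_grad():
    feats = resnet(images)      # no graph is built, so nothing can be backpropagated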


Did you solve this problem? How did you extract the features?

I’m a newbie myself. I love PyTorch’s architecture, but I’m tempted to go back to TensorFlow because the documentation and examples are few and fragmentary. To add a few more fragments below, first we create a mean and standard deviation with the correct dimensions:

import numpy as np
import torch

resnet_mean = torch.from_numpy(
    np.array([0.485, 0.456, 0.406], dtype=np.float32))
resnet_std = torch.from_numpy(
    np.array([0.229, 0.224, 0.225], dtype=np.float32))
# For below, we need to match the dimensions. If you aren't unsqueezing, you're not pytorching.
mean = resnet_mean.unsqueeze(0).unsqueeze(-1)  # add a dimension for batch and (height*width)
std = resnet_std.unsqueeze(0).unsqueeze(-1)    # add a dimension for batch and (height*width)

OK, now it gets slightly more complex. Images in PyTorch tensors are channels x height x width, but mostly they come in collections or “batches”, which means batch x channels x height x width. Let’s assume, via your wrestling with your dataloader, that you have the latter. Then let’s reshape that tensor (OK, some programmer wants to call the reshape we all know from numpy “view”, oi vey). Assume we have a tensor of batch x chan x height x width; again, this would presumably be in some data-loader normalization function:
# ... tensor enters normalization function ...
h, w = tensor.shape[2:]
norm_tensor = tensor.view(tensor.shape[0], tensor.shape[1], -1)         # batch x channel x (height*width)
norm_tensor = norm_tensor - mean                                        # make the image mean zero
norm_tensor = norm_tensor / std                                         # make the std 1
norm_tensor = norm_tensor.view(tensor.shape[0], tensor.shape[1], h, w)  # back to batch x chan x h x w

All set; the tensors are now normalized.
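For what it’s worth, the usual shortcut for this normalization is to let torchvision.transforms do it per image inside the dataset pipeline; a sketch of the standard recipe:

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),                       # HWC uint8 image -> CHW float tensor in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])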

Now, joining after Arul’s code … presumably this would be the data loader loop that wraps the incoming tensors normalized as above:
output = resnet(norm_tensor)

OK, you have output features from your headless ResNet. I think what you really wanted is not the features, but some other trainable head you put on top of the headless ResNet … currently grinding through that. The gist is to create a model, “foo”.
Right after Arul’s code, do

foo_resnet = nn.Sequential(resnet, foo())   # resnet is already a module instance; foo() constructs the new head

And of course, my line above this then becomes:

output = foo_resnet(norm_tensor)  # your mileage may vary, unchecked code
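For concreteness, a minimal sketch of such a head (the name Foo, the 2048 input size, and the 10-way output are all assumptions for illustration); note that nn.Sequential wants module instances, not the result of calling them:

import torch.nn as nn

class Foo(nn.Module):
    def __init__(self, in_features=2048, num_classes=10):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.fc(x.flatten(1))    # (B, 2048, 1, 1) -> (B, 2048) -> (B, num_classes)

foo_resnet = nn.Sequential(resnet, Foo())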

What is the alternative to resnet50.features?

I ran into this issue while going through https://towardsdatascience.com/visualizing-convolution-neural-networks-using-pytorch-3dfa8443e74e, since I was using ResNet50 instead of the model from the article.

The solution I came up with was:

features = []
# model._modules came from: https://github.com/utkuozbulak/pytorch-cnn-visualizations/issues/50#issuecomment-531757757
for key, value in model._modules.items():
    features.append(value)

where model is the model I created; then, instead of doing model.features, I just reference features directly, since it’s a separate object.
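If the goal is the same kind of feature extraction as above, that list can be wrapped back into a Sequential without the final fc layer; a sketch, assuming model is the pretrained ResNet50:

import torch
import torch.nn as nn

feature_extractor = nn.Sequential(*features[:-1])          # drop the fc layer
with torch.no_grad():
    out = feature_extractor(torch.randn(1, 3, 224, 224))   # dummy input, illustration only
print(out.shape)                                           # torch.Size([1, 2048, 1, 1]) for ResNet50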
