How to extract features of an image from a trained model

Thanks for the quick response,
I get the gist of how to do it.

Again, with

inception = torchvision.models.inception_v3()

inception does not have the attribute 'Conv2d_1a_3x3'.

How do I access them using inception?

It should have these modules, as shown in the link in my previous message.
If not, you are probably using a DataParallel on top of inception, so you first need to get the underlying module back via inception.module.Conv2d_1a_3x3. Just to be sure, print your model and inspect the names of the modules that are printed.
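For illustration, a minimal sketch of both cases (assuming a standard torchvision install):

import torch
import torchvision

inception = torchvision.models.inception_v3(pretrained=True)
print(inception)  # inspect the module names that get printed

conv = inception.Conv2d_1a_3x3  # direct attribute access on the bare model

# if the model is wrapped in DataParallel, go through .module first
parallel = torch.nn.DataParallel(inception)
conv = parallel.module.Conv2d_1a_3x3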

Got it to work. Thanks! 🙂

So how do I perform the backward pass for a model with multiple outputs? Is it like this?

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 1, 3)
        self.conv2 = nn.Conv2d(1, 1, 3)
        self.conv3 = nn.Conv2d(1, 1, 3)

    def forward(self, x):
        out1 = F.relu(self.conv1(x))
        out2 = F.relu(self.conv2(out1))
        out3 = F.relu(self.conv3(out2))
        return out1, out2, out3

model = Net()
o1, o2, o3 = model(input)
loss1 = criterion(o1, target1)
loss2 = criterion(o2, target2)
loss3 = criterion(o3, target3)
loss1.backward()
loss2.backward()
loss3.backward()

If it is not like this, how do I make it work?

You can use torch.autograd.backward, or just sum the losses and backprop the summed loss:

loss = loss1 + loss2 + loss3
loss.backward()
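For the torch.autograd.backward alternative mentioned above, a sketch (assuming a recent-enough PyTorch where the grad arguments default to None for scalar losses):

torch.autograd.backward([loss1, loss2, loss3])
# equivalent to (loss1 + loss2 + loss3).backward()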

I wrote a demo:

import torch
import torch.nn as nn
from torch.autograd import Variable

criterion = nn.MSELoss()
x = Variable(torch.randn(1, 100), requires_grad=True)
y = Variable(torch.randn(1, 40))

class ToyModel(nn.Module):
    def __init__(self):
        super(ToyModel, self).__init__()
        self.linear1 = nn.Linear(100, 50)
        self.linear2 = nn.Linear(50, 40)
        self.linear3 = nn.Linear(100, 40)

    def forward(self, x):
        out1 = self.linear1(x)
        out2 = self.linear2(out1)
        out3 = self.linear3(x)
        return out2, out3

model = ToyModel()
out2, out3 = model(x)
print(out3)
loss1 = criterion(out2, y)
loss2 = criterion(out3, y)
# print(out2.grad)

torch.autograd.backward(x, [grad1, grad2])  # grad1 and grad2 are undefined (that's the question)

But how do I get grad1 and grad2?
print(out2.grad) just gives me None.
Is there any bug in my code? Thanks!

But how do I get grad1 and grad2?

You don't have to; that's why PyTorch is amazing:

model = ToyModel()
out2, out3 = model(x)
loss1 = criterion(out2, y)
loss2 = criterion(out3, y)
loss = loss1 + loss2
loss.backward()

If I modify it to

loss = loss1 + 0.8 * loss2

is it then a weighted loss?

Yes, it is weighted. You can mix the losses however you want.


What if I really need to get grad1 and grad2? (Quoting the ToyModel demo above.)

Before performing backpropagation you won't have gradients, as they are only allocated when they are actually computed (lazy initialization):

import torch
import torch.nn as nn
from torch.autograd import Variable

model = nn.Linear(5, 7)
x = Variable(torch.randn(10, 5))
y = model(x)

print(model.weight.grad) # None

y.backward(torch.randn(y.size()))

print(model.weight.grad) # Prints a 7x5 tensor

What about the model described above, the ToyModel that returns multiple outputs?

You need to use hooks if you want to inspect gradients of intermediate variables.
Refer to the discussion here: Why can't I see .grad of an intermediate variable?

In your case you need to attach a hook to out2 and out3; the hook will receive their grads.
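For illustration, a minimal sketch reusing x, y, criterion, and ToyModel from the demo above (register_hook fires during backward with the gradient of that variable):

grads = {}

def save_grad(name):
    def hook(grad):
        grads[name] = grad
    return hook

model = ToyModel()
out2, out3 = model(x)
out2.register_hook(save_grad('out2'))
out3.register_hook(save_grad('out3'))

loss = criterion(out2, y) + criterion(out3, y)
loss.backward()

print(grads['out2'])  # gradient of the loss w.r.t. out2
print(grads['out3'])  # gradient of the loss w.r.t. out3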

Hey guys, pardon my silly question. Of what use is the FC layer? How do you generalize to localization/detection of objects using the last FC-layer features?

In this line new_classifier = nn.Sequential(*list(model.classifier.children())[:-1]) from @fmassa's post, I wonder if it was meant to read

  new_classifier = nn.Sequential(*list(model.children())[:-1])

I checked the methods of the resnet18 model and it seemed to have

 |  children(self)
 |      Returns an iterator over immediate children modules.

as a direct method. Am I wrong?

resnet18 doesn't have a classifier field; the solution I gave was for VGG.
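For illustration, a sketch of the two cases (assuming torchvision's vgg16 and resnet18): VGG keeps its fully-connected head in a classifier submodule, so you slice inside that, whereas for ResNet you slice the model's own children:

import torch.nn as nn
import torchvision

vgg = torchvision.models.vgg16(pretrained=False)
vgg.classifier = nn.Sequential(*list(vgg.classifier.children())[:-1])  # drop the last fc of the head

resnet = torchvision.models.resnet18(pretrained=False)
resnet_features = nn.Sequential(*list(resnet.children())[:-1])  # drop the final fc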


@fmassa In order to get the fc7 features of a resnet, do I need to write a class like you did with Inception, or is there a more straightforward way of doing so?

Just using:

new_classifier = nn.Sequential(*list(model.children())[:-1])
model = new_classifier

seems to work. Right?

Also, is there a way of getting both the fc7 features and the results of the softmax?
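A sketch of one way to get both (not from the thread; assumes torchvision's resnet18 and 224x224 input): keep the original model intact, run a truncated copy for the features, then reuse the original fc followed by a softmax:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torch.autograd import Variable

resnet = torchvision.models.resnet18(pretrained=True)
feature_extractor = nn.Sequential(*list(resnet.children())[:-1])

x = Variable(torch.randn(1, 3, 224, 224))
features = feature_extractor(x).view(x.size(0), -1)  # 512-d pooled features
logits = resnet.fc(features)  # reuse the original final fc
probs = F.softmax(logits)  # softmax over the 1000 classes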

Hey @apaszke, @fmassa, if I download a resnet18 without weights for example,

resnet = torchvision.models.resnet18(pretrained=False)

Now, say I want to change the shape of the last layer to 512->2.
Which is the appropriate way to reshape the last fc layer? This:

resnet.add_module('9', nn.Linear(512, 2))?

@lakehanne

resnet.fc = nn.Linear(512, 2)
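For example, a quick check (assuming the resnet18 from above):

import torch
import torch.nn as nn
import torchvision
from torch.autograd import Variable

resnet = torchvision.models.resnet18(pretrained=False)
resnet.fc = nn.Linear(512, 2)  # replace the 512 -> 1000 head with 512 -> 2

out = resnet(Variable(torch.randn(1, 3, 224, 224)))
print(out.size())  # (1, 2)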

Can you please tell me how you are loading the model in a different file?
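In case it helps, a minimal sketch of the standard state_dict workflow (the file name is hypothetical): save the weights in one script, then rebuild the same architecture before loading them in another:

# script 1: save the trained weights
torch.save(resnet.state_dict(), 'resnet18_2class.pth')

# script 2: rebuild the architecture, then load the weights
import torch
import torch.nn as nn
import torchvision

resnet = torchvision.models.resnet18(pretrained=False)
resnet.fc = nn.Linear(512, 2)
resnet.load_state_dict(torch.load('resnet18_2class.pth'))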