I am trying to use PyTorch to get the outputs from intermediate layers of AlexNet/VGG:
import torch.nn as nn
from torchvision import models

alexnet_model = models.alexnet(pretrained=True)
modules = list(alexnet_model.children())[:-1 * int(depth)]
alexnet_model = nn.Sequential(*modules)
What is odd is that I get the exact same output values (i.e. the same model) for depth=1 and depth=2, and then the same output values for every depth from 3 through 10. I observe the same phenomenon for VGG. However, I don’t observe it for ResNet, which gives me different output values (i.e. different models) for every depth in [1, 10].
Any ideas about what might be going on?
list(alexnet_model.children()) will return a list of length 3 containing the first nn.Sequential block for feature extraction, the nn.AdaptiveAvgPool2d layer, and the last nn.Sequential block used as the classifier.
If you use depth>=3, modules will be empty and you will just get back your input tensor. As for depth=1 vs. depth=2: with the usual 224x224 input, the feature extractor already outputs a 6x6 map, so the nn.AdaptiveAvgPool2d((6, 6)) layer is a no-op and both truncations produce the same values.
Thanks for your response! How do I get outputs of the layers within the sequential blocks? And how is this working for the ResNet architecture?
You could use a forward hook as described in this example.
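Here is a minimal sketch of the forward-hook approach on a toy model (the layer name "fc0" is just illustrative):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

activations = {}

def save_activation(name):
    # returns a hook that stores the layer's output under the given name
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

# capture the output of the first linear layer
model[0].register_forward_hook(save_activation("fc0"))

out = model(torch.randn(1, 4))
print(activations["fc0"].shape)  # torch.Size([1, 8])
```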
Thanks for the reference! Apparently, the layers within the sequential blocks (of AlexNet, VGG, etc.) don’t have names associated with them (e.g. ‘self.fc2’); how could I extract outputs from certain layers within the last sequential block using your function?
You can access a module inside an nn.Sequential block by indexing it:
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 6, 3, 1, 1),
    nn.ReLU(),
    nn.Conv2d(6, 1, 3, 1, 1)
)

# get the second conv layer (index 2)
c = model[2]