To complement @apaszke's reply: once you have a trained model, if you want to extract the result of an intermediate layer (say fc7 after the ReLU), you have a couple of possibilities.
You can reconstruct the classifier once the model has been instantiated, as in the following example:
import torch
import torch.nn as nn
from torchvision import models
model = models.alexnet(pretrained=True)
# remove last fully-connected layer
new_classifier = nn.Sequential(*list(model.classifier.children())[:-1])
model.classifier = new_classifier
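As a quick sanity check (a minimal sketch, assuming a dummy 3x224x224 input batch), the modified model should now return the 4096-dimensional fc7 activations instead of the 1000 class scores:

model.eval()
x = torch.randn(1, 3, 224, 224)  # dummy input batch
with torch.no_grad():
    fc7_features = model(x)
print(fc7_features.shape)  # expected: torch.Size([1, 4096])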
Or, if you instead want to extract other parts of the model, you might need to recreate the model structure and reuse the parts of the pre-trained model in the new model.
import torch
import torch.nn as nn
from torchvision import models
original_model = models.alexnet(pretrained=True)
class AlexNetConv4(nn.Module):
    def __init__(self):
        super(AlexNetConv4, self).__init__()
        self.features = nn.Sequential(
            # stop at conv4
            *list(original_model.features.children())[:-3]
        )

    def forward(self, x):
        x = self.features(x)
        return x

model = AlexNetConv4()
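Again a minimal sketch to verify the output, assuming a dummy 224x224 input: the truncated network now returns conv4 feature maps rather than class scores.

model.eval()
x = torch.randn(1, 3, 224, 224)  # dummy input batch
with torch.no_grad():
    conv4_features = model(x)
print(conv4_features.shape)  # expected: torch.Size([1, 256, 13, 13]) for the torchvision AlexNet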