Load and use only part of a model

I have a pretrained model (I’ll use the sample MLP shown below). I’m trying to load this pretrained model but use only layers 2 to 6 (i.e. from max_pool to fc3). How can I go about that, please?

mlp.load_state_dict(torch.load(model_path))  # load_state_dict fills mlp with the stored weights in place
pretrained_mlp = mlp
new_model = ...  # only layers 2 to 6 of pretrained_mlp should go here
MLP(
  (conv2d): Conv2d(256, 16, kernel_size=(3, 3), stride=(1, 1))
  (max_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (fc1): Linear(in_features=29344, out_features=64, bias=True)
  (relu): ReLU()
  (fc2): Linear(in_features=64, out_features=64, bias=True)
  (fc3): Linear(in_features=64, out_features=2, bias=True)
  (mapping): Sequential(
    (0): Linear(in_features=3, out_features=16, bias=True)
    (1): ReLU()
    (2): Linear(in_features=16, out_features=16, bias=True)
    (3): ReLU()
    (4): Linear(in_features=16, out_features=16, bias=True)
    (5): ReLU()
    (6): Linear(in_features=16, out_features=16, bias=True)
  )
)

Assuming you want to use these layers from the pre-trained model:

  (max_pool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (fc1): Linear(in_features=29344, out_features=64, bias=True)
  (relu): ReLU()
  (fc2): Linear(in_features=64, out_features=64, bias=True)
  (fc3): Linear(in_features=64, out_features=2, bias=True)

I think the cleanest approach would be to write a custom nn.Module and pass these layers to it, or to re-wrap them into e.g. a new nn.Sequential container. Note that the latter approach would need some additional changes, as the functional API calls from the original model’s forward method would be missing. In particular, I expect that a flattening operation (e.g. x = x.view(x.size(0), -1)) was used in the forward between the MaxPool2d layer and the first Linear layer.
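For illustration, a minimal sketch of the custom nn.Module approach could look like this. The class name PartialMLP is made up, pretrained_mlp is assumed to be your loaded model, and I’m guessing that your original forward applies relu after fc1 and fc2:

import torch.nn as nn

class PartialMLP(nn.Module):
    # Hypothetical wrapper that reuses layers 2 to 6 of the pretrained model.
    def __init__(self, pretrained):
        super().__init__()
        # Assigning the pretrained submodules registers them (and their weights) here.
        self.max_pool = pretrained.max_pool
        self.fc1 = pretrained.fc1
        self.relu = pretrained.relu
        self.fc2 = pretrained.fc2
        self.fc3 = pretrained.fc3

    def forward(self, x):
        x = self.max_pool(x)
        x = x.view(x.size(0), -1)  # flatten before the first linear layer
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.fc3(x)
        return x

new_model = PartialMLP(pretrained_mlp)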

How would I go about wrapping it into a new nn.Sequential container? I tried the code below, but I get an error:

new_model = torch.nn.Sequential(*(list(pretrained_mlp.children()))[2:7])

You would need to check the forward implementation and could then explicitly create the nn.Sequential container e.g. via:

import torch.nn as nn

new_model = nn.Sequential(
    pretrained_mlp.max_pool,
    nn.Flatten(),          # replaces the x = x.view(x.size(0), -1) call from the original forward
    pretrained_mlp.fc1,
    pretrained_mlp.relu,
    pretrained_mlp.fc2,
    pretrained_mlp.relu,
    pretrained_mlp.fc3,
)
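Reusing the same relu module twice in the container is fine, since nn.ReLU is stateless and parameter-free, so calling it in two places behaves exactly like having two separate ReLU layers.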

Your approach should also work, assuming that children() returns all modules and that these modules are indeed applied sequentially in your original forward method.
As previously mentioned, at least the flattening operation seems to be missing, so I guess this is what’s causing the error you are seeing.
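If you prefer to keep the children()-based slicing, a rough sketch could look as follows. Note that with 0-based indexing the span from max_pool to fc3 corresponds to the slice [1:6] of the printed module list (not [2:7]), and the missing flatten has to be inserted manually:

import torch.nn as nn

# Sketch only: assumes the submodules were registered in the order printed above,
# i.e. list(pretrained_mlp.children()) is [conv2d, max_pool, fc1, relu, fc2, fc3, mapping].
layers = list(pretrained_mlp.children())[1:6]   # max_pool, fc1, relu, fc2, fc3
new_model = nn.Sequential(
    layers[0],        # max_pool
    nn.Flatten(),     # the flatten that was done via x.view(x.size(0), -1) in the original forward
    *layers[1:],      # fc1, relu, fc2, fc3
)

Also keep in mind that children() yields each submodule only once, so the relu that is applied a second time after fc2 in the explicit container above would not be repeated here; that’s another reason to double-check the original forward implementation.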