How to drop layers from a custom PyTorch model

I have a custom model that I'm using, from this git repo.

Here is the basic architecture (there are some additional conv layers):

  (lconv6aa): Sequential(
    (0): Conv2d(128, 196, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.1)
  )
  (lconv6a): Sequential(
    (0): Conv2d(196, 196, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.1)
  )
  (lconv6b): Sequential(
    (0): Conv2d(196, 196, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.1)
  )
  (corr): Correlation()
  (leakyRELU): LeakyReLU(negative_slope=0.1)
  (conv6_0): Sequential(
    (0): Conv2d(81, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.1)
  )
  (conv6_1): Sequential(
    (0): Conv2d(209, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.1)
  )
  (conv6_2): Sequential(
    (0): Conv2d(337, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.1)
  )
  (conv6_3): Sequential(
    (0): Conv2d(433, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.1)
  )
  (conv6_4): Sequential(
    (0): Conv2d(497, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): LeakyReLU(negative_slope=0.1)
  )
  (fc1): Linear(in_features=529, out_features=512, bias=True)
  (fc1_trasl): Linear(in_features=512, out_features=256, bias=True)
  (fc1_rot): Linear(in_features=512, out_features=256, bias=True)
  (fc2_trasl): Linear(in_features=256, out_features=3, bias=True)
  (fc2_rot): Linear(in_features=256, out_features=4, bias=True)
  (dropout): Dropout(p=0.0)

The model is running with pre-trained weights, and I want to replace these last fc layers with my own. Is there some way I can drop those layers and replace them with my own custom ones? I'm a beginner in PyTorch and wasn't able to figure this out from the posts I found here. Any help would be appreciated!

Yes, you can directly assign new modules to the model, e.g.:

model.fc1 = nn.Linear(...)

to replace the linear layer assigned to fc1.
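
For example, here is a minimal sketch assuming the model printed above has already been loaded into `model`; the replacement sizes are placeholders for whatever your own task needs, with `in_features` matching the preceding layer:

import torch.nn as nn

# `model` is the loaded pre-trained network from the repo.
# in_features must match the output of the preceding layer
# (fc1 outputs 512 per the printout above); out_features is
# whatever your own head should produce.
model.fc1_trasl = nn.Linear(512, 256)
model.fc2_trasl = nn.Linear(256, 3)  # e.g. a 3-dim translation output

Note that newly assigned layers are randomly initialized, so the pre-trained weights of the replaced layers are discarded and the new layers need to be trained (or fine-tuned).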


Thank you! A quick follow-up: on PyTorch versions before 1.1.0, is there an alternative to torch.nn.Identity? I want to replace some of these fc layers with dummies if possible. I am constrained to using an older version of PyTorch (1.0.2).

You could write a custom layer, which would just forward the input:

import torch.nn as nn

class Identity(nn.Module):
    # a no-op module: forward returns the input unchanged
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x
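
You can then assign an instance of it to any submodule you want to bypass, e.g.:

model.dropout = Identity()  # this slot now just passes its input through

Just make sure the tensors flowing out of the Identity still have the shape the subsequent layers expect.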