Can I remove a layer from a pre-trained model while loading the model weights?

Hi,
I am working on a problem that requires pre-training a first model and then fine-tuning this pre-trained model along with a second model. When training the first model, it needs a classification layer in order to compute a loss. However, I do not need that classification layer when using the pre-trained model along with my second model; I only need its output (which in my case is the hidden state of an LSTM). At the second stage, when loading the pre-trained model, if I remove the classification layer, does PyTorch automatically ignore the weights for that layer and keep the rest during fine-tuning? Are the weights of each sub-module saved separately and then used as needed? This is a snippet of the code:

import torch
import torch.nn as nn


class LSTMD(nn.Module):

    def __init__(self, input_size, hidden_size, num_classes):
        super(LSTMD, self).__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(num_classes, hidden_size)
        self.lstm = nn.LSTMCell(input_size, hidden_size)
        self.classification = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # initial hidden and cell states
        h, c = torch.zeros(1, self.hidden_size), torch.zeros(1, self.hidden_size)
        x = self.embedding(x)
        h, c = self.lstm(x, (h, c))
        out = self.classification(h)
        return out

model = LSTMD(512, 512, 1000)
model_optimizer = torch.optim.Adam(model.parameters())  # any optimizer; Adam is just an example

state = {'model': model, 'model_optimizer': model_optimizer}
torch.save(state, 'saved.pth')

After that, I want to load my pre-trained model, but only use h from the LSTM output and discard the classification layer when fine-tuning the model.

class New(nn.Module):
    
    def __init__(self, checkpoint):
        super(New, self).__init__()
        checkpoint = torch.load(checkpoint)
        old_model = checkpoint['model']
        # I removed out = self.classification(h) from the old model
        modules = list(old_model.children())[:-1]
        self.new_model = nn.Sequential(*modules)

    def forward(self, x):
        out = self.new_model(x)
        return out

But then I get the error: forward() takes _ positional arguments but _ were given (the exact numbers depend on the arguments in my case).

However, when I remove
self.classification = nn.Linear(hidden_size, num_classes)
and
out = self.classification(h)
from the first model and load the checkpoint, I get no error. But I'm afraid this is not the correct way, as the model might have problems fine-tuning later. The problem mainly happens when I call the forward function on the new model created by nn.Sequential. Even if I keep all the layers by doing:

modules = list(old_model.children())
self.new_model = nn.Sequential(*modules)

And then run the forward function, it doesn't work. Something is happening in the sequential operation. Any clue what the problem might be?
Thanks!

Could you try to save the state_dict instead of the model and optimizer directly?
Then while restoring, try to use strict=False in .load_state_dict.
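
For reference, a minimal sketch of that suggestion, assuming the fine-tuning stage re-defines the model without the classification layer (LSTMEncoder below is hypothetical, i.e. LSTMD minus self.classification and the final linear call; the optimizer is just an example):

import torch

# pre-training stage: save only the state_dicts instead of the full objects
model = LSTMD(512, 512, 1000)
model_optimizer = torch.optim.Adam(model.parameters())
torch.save({'model': model.state_dict(),
            'model_optimizer': model_optimizer.state_dict()}, 'saved.pth')

# fine-tuning stage: build the head-less model and load the matching weights
encoder = LSTMEncoder(512, 512, 1000)  # hypothetical: LSTMD without self.classification
checkpoint = torch.load('saved.pth')

# strict=False skips keys that have no match in the new model,
# here 'classification.weight' and 'classification.bias'
incompatible = encoder.load_state_dict(checkpoint['model'], strict=False)
print(incompatible.unexpected_keys)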


Hi @ptrblck. This worked perfectly. Apparently there were some extra keys that I hadn't taken note of, so PyTorch couldn't match them. However, I thought of a cleaner solution: can't I just freeze the classification layer when fine-tuning? It is the last layer, so backprop wouldn't be cut off in the middle:

for p in model.classification.parameters():
    p.requires_grad = False

It depends on your use case and if you need the output of the last linear layer or not.
Could you explain your use case a bit?

Hi @ptrblck. Thanks for your reply!
I don't really need the output of the classification layer in the fine-tuning stage; I only need it when pre-training the first model, for supervision (loss calculation) purposes. After the first model has been trained, I only want to use the output of the LSTM, not the classification layer, but I still need to fine-tune everything except the classification layer. My case is similar to fine-tuning a pre-trained CNN, where we remove the classification layer and fine-tune the remaining layers.
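
A minimal sketch of that setup, freezing only the classification head and giving the optimizer just the trainable parameters (the checkpoint layout follows the state_dict sketch above; the optimizer and learning rate are placeholders):

import torch

model = LSTMD(512, 512, 1000)
model.load_state_dict(torch.load('saved.pth')['model'])

# freeze only the classification head; everything else stays trainable
for p in model.classification.parameters():
    p.requires_grad = False

# hand the optimizer only the parameters that should be fine-tuned
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.01)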

I have a similar doubt. Is it possible to remove intermediate layers from pretrained models?

If the input and output shapes are the same, you could replace these layers with an nn.Identity module.
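
For example, with torchvision's resnet18 as a stand-in (a sketch; the chosen layers are arbitrary):

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()

# the ReLU inside the first block keeps its input shape, so it can be swapped out
model.layer1[0].relu = nn.Identity()

# the classification head can be dropped the same way, since nothing
# downstream consumes its output
model.fc = nn.Identity()

out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 512])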

Thanks for your response!
I was trying to check whether a model has a convolution layer and, if so, add another custom layer in front of it.
Pseudo code:

for layer in self.model.children():
    if isinstance(layer, nn.Conv2d):
        # add a custom layer in front of it
        ...

Is this possible to achieve in PyTorch?

I don't think there is an easy solution to do this automatically.
You could abuse the __getattr__ and __setattr__ methods and manipulate the modules with them.
E.g. here is a working (but not recommended) code snippet to add nn.Identity() modules in front of all conv layers in a ResNet:

import torch.nn as nn
from torchvision import models

model = models.resnet18()

# Get all layer names you would like to change
layers_to_change = []
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        print('found ', name)
        layers_to_change.append(name)

# Iterate over all layers to change
for layer_name in layers_to_change:
    # Check if the name is nested
    *parent, child = layer_name.split('.')
    if len(parent) > 0:
        # Nested: walk down to the parent module
        m = model.__getattr__(parent[0])
        for p in parent[1:]:
            m = m.__getattr__(p)
    else:
        # Top-level layer: the parent is the model itself
        m = model
    # Get the original conv layer and replace it with nn.Identity + conv
    orig_layer = m.__getattr__(child)
    m.__setattr__(child, nn.Sequential(
        nn.Identity(), orig_layer))

print(model)

While this seems to work for this model, I would recommend creating a new model class by deriving from the desired model as the base class and manipulating the layers inside the __init__ method.
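
A sketch of that recommended pattern, again for resnet18 (the class name and the inserted nn.Identity are placeholders for whatever manipulation is needed):

import torch
import torch.nn as nn
from torchvision.models.resnet import ResNet, BasicBlock

class MyResNet(ResNet):
    def __init__(self, num_classes=1000):
        # same configuration as resnet18: BasicBlock with [2, 2, 2, 2] blocks
        super().__init__(BasicBlock, [2, 2, 2, 2], num_classes=num_classes)
        # manipulate the layers here instead of patching attributes afterwards
        self.conv1 = nn.Sequential(nn.Identity(), self.conv1)

model = MyResNet()
out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])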


Thanks a lot!
I had another doubt and would really appreciate the help.
If my last layer has one output, i.e. my target is to train a model on only one class (for example: cat), which loss function would be adequate for it? I was using MSE, but it doesn't converge; it goes past the target value when more data is used.

I assume you are still dealing with two classes, i.e. cat vs. not-cat?
If you are dealing with a single class, you cannot learn anything.

On the other hand, for a binary classification (cat vs. not-cat), you could use nn.BCEWithLogitsLoss.
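
A minimal sketch of that setup (the linear layer below is a placeholder for the real network's single-logit head): the model outputs one raw logit per sample, and nn.BCEWithLogitsLoss applies the sigmoid internally.

import torch
import torch.nn as nn

model = nn.Linear(16, 1)            # placeholder for a network with a single output logit
criterion = nn.BCEWithLogitsLoss()  # expects raw logits and float targets

x = torch.randn(8, 16)
target = torch.randint(0, 2, (8, 1)).float()  # 1 = cat, 0 = not-cat

loss = criterion(model(x), target)
loss.backward()
print(loss.item())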

If I have nn.LSTM(input_size, hidden_size, num_layers=2, batch_first=True, bidirectional=True) in my network, is there any way to remove the second layer of the LSTM after training?

Based on this great visualization by @vdw, you could probably try to set the weights and biases in the corresponding layer to ones and zeros, and index the output state accordingly.
Note that you can find the corresponding layer parameters e.g. via lstm.weight_hh_l0, where l0 indicates the layer index.
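
To see which parameters belong to which layer and direction, it may help to simply print them (a sketch with arbitrary sizes):

import torch.nn as nn

lstm = nn.LSTM(input_size=32, hidden_size=64, num_layers=2,
               batch_first=True, bidirectional=True)

# each layer l has weight_ih_l{l}, weight_hh_l{l}, bias_ih_l{l}, bias_hh_l{l},
# plus a *_reverse set for the backward direction
for name, param in lstm.named_parameters():
    print(name, tuple(param.shape))
# e.g. weight_hh_l1 and weight_hh_l1_reverse are the second layer's recurrent weights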

Thank you for your response! Do you mean lstm.bias_hh_l1, lstm.bias_ih_l1, lstm.weight_hh_l1, lstm.weight_ih_l1, plus the reverse versions of these? I am trying to remove the second LSTM layer from the state dictionary of the model. Could you please show this with an example for more clarification?

My problem is that I do not see a linear relationship between these weights and biases and the inputs and outputs of an LSTM unit, so I am confused about which elements should be set to zero or one. Is this possible at all? If it is, I would be grateful if you could help me with it. Otherwise, what do you think is the best way to approach this problem?

Thanks for your response!
The task at hand is to train the model to have a variance of one. So I have a target class of one, and I am training the model with MSELoss and stochastic gradient descent. The issue is that I am not able to train the model; it never converges.

I’m always learning from you. Thank you very much!
