How to retrain a model from saved weights after changing the architecture?

Hello,
I have a use case where, after the data in the database is updated, I want to retrain the model with a changed `out_features` in the `Linear` layer of my network
(`out_features` = number of unique words in the data).
Initially I knew the vocabulary size, so I built the model with the `Linear` layer's `out_features` equal to the vocabulary size, then trained it and saved the weights.
Now, whenever the data is updated, I need to retrain the same model starting from the saved weights.
But every data update also increases `vocab_size`, which means I need to change the `out_features` of the `Linear` layer (and the size of the embedding layer) in my network.
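For reference, after the initial training I save the weights like this (a minimal sketch; `model` and the training loop come from my training script and are omitted):

```python
import torch

# model was built with the OLD vocab_size (323) and already trained;
# save only the parameters (the state_dict), not the whole module
torch.save(model.state_dict(), 'weights.pth')
```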

```python
print('OLD Vocabulary size:', len(tokenTOint))  # mapping built from the old data
print('NEW Vocabulary size:', len(token2int))   # mapping built from the updated data
```

```
OLD Vocabulary size: 323
NEW Vocabulary size: 401
```
```python
import torch
import torch.nn as nn

vocab_size = len(token2int)  # NEW vocabulary size (401)

class WordLSTM(nn.Module):
    def __init__(self, n_hidden=256, n_layers=1, lr=0.01):
        super().__init__()
        self.n_layers = n_layers
        self.n_hidden = n_hidden
        self.lr = lr
        # Both of these layers depend on vocab_size, so they grow
        # whenever the vocabulary grows:
        self.emb_layer = nn.Embedding(vocab_size, 300)
        self.lstm = nn.LSTM(input_size=300, hidden_size=n_hidden,
                            num_layers=n_layers, batch_first=True)
        self.fc = nn.Linear(n_hidden, vocab_size)

    def forward(self, x, hidden):
        embedded = self.emb_layer(x)
        lstm_output, hidden = self.lstm(embedded, hidden)
        out1 = lstm_output.reshape(-1, self.n_hidden)
        out = self.fc(out1)
        return out, hidden

    def init_hidden(self, batch_size):
        # Zero-initialized (h, c) tensors with the same dtype/device
        # as the model parameters
        weight = next(self.parameters()).data
        hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_(),
                  weight.new(self.n_layers, batch_size, self.n_hidden).zero_())
        return hidden

model = WordLSTM()  # built with the NEW vocab_size (401)
model.load_state_dict(torch.load('weights.pth'), strict=False)
```

This fails even with `strict=False`, since `strict=False` only ignores missing/unexpected keys, not shape mismatches:
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_12436\1079059162.py in <module>
     23 
     24 model = WordLSTM()
---> 25 model.load_state_dict(torch.load('weights.pth'),strict = False)
     26 model

~\.conda\envs\ElasticSearch\lib\site-packages\torch\nn\modules\module.py in load_state_dict(self, state_dict, strict)
   1603         if len(error_msgs) > 0:
   1604             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1605                                self.__class__.__name__, "\n\t".join(error_msgs)))
   1606         return _IncompatibleKeys(missing_keys, unexpected_keys)
   1607 

RuntimeError: Error(s) in loading state_dict for WordLSTM:
	size mismatch for emb_layer.weight: copying a param with shape torch.Size([323, 300]) from checkpoint, the shape in current model is torch.Size([401, 300]).
	size mismatch for fc.weight: copying a param with shape torch.Size([323, 256]) from checkpoint, the shape in current model is torch.Size([401, 256]).
	size mismatch for fc.bias: copying a param with shape torch.Size([323]) from checkpoint, the shape in current model is torch.Size([401]).
```
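What I am considering is building the new model with the larger vocabulary and copying the overlapping parameters from the checkpoint by hand, leaving the rows for the new tokens at their fresh initialization. Below is a rough sketch of that idea (it assumes the new tokens are appended to the end of `token2int`, so the old indices keep their meaning):

```python
new_model = WordLSTM()  # constructed with the NEW vocab_size (401)

old_state = torch.load('weights.pth')  # parameters trained with vocab_size = 323
new_state = new_model.state_dict()

for name, old_param in old_state.items():
    if name not in new_state:
        continue
    if new_state[name].shape == old_param.shape:
        # LSTM / hidden-size parameters: shapes are unchanged, copy as-is
        new_state[name] = old_param
    else:
        # emb_layer.weight, fc.weight and fc.bias differ only in the
        # vocabulary dimension (dim 0): copy the first 323 rows and keep
        # the freshly initialized rows for the new tokens
        n_old = old_param.shape[0]
        new_state[name][:n_old] = old_param

new_model.load_state_dict(new_state)  # strict load now succeeds
```

Is this a sound approach, or is there a recommended way to resize `nn.Embedding` / `nn.Linear` while keeping the trained weights?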

Please help me with this.