Dynamic addition of neurons

I am trying to add hidden units to a 3-layer neural network (input, hidden, output) dynamically as I train it. I want to keep the weights of the already-trained part of the network as I add new hidden units. This is my code:

class my_network(torch.nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(my_network, self).__init__()
        self.I = input_dim
        self.H = hidden_dim
        self.O = output_dim
        self.layer1 = torch.nn.Linear(input_dim, hidden_dim)
        self.layer2 = torch.nn.Linear(hidden_dim, output_dim)

    def add_neurons(self, no_of_neurons, flag):
        if flag == 1:
            # save the trained weights and biases before replacing the layers
            weights = [self.layer1.weight.data, self.layer2.weight.data]
            biases = [self.layer1.bias.data, self.layer2.bias.data]
            self.layer1 = torch.nn.Linear(self.I, self.H + no_of_neurons)
            self.layer2 = torch.nn.Linear(self.H + no_of_neurons, self.O)
            # copy the old parameters back into the corresponding slices
            self.layer1.weight.data[:-no_of_neurons, :] = weights[0]
            self.layer1.bias.data[:-no_of_neurons] = biases[0]
            self.layer2.weight.data[:, :-no_of_neurons] = weights[1]
            self.layer2.bias.data = biases[1]
            self.H = self.H + no_of_neurons
        return self.layer1.weight.shape[0]

    def forward(self, x):
        temp = self.layer1(x)
        out = self.layer2(temp)
        return out

I have noticed that once I call the add_neurons method, the weights stop updating (although gradients are still being generated). Any help would be much appreciated.

How are you running the optimizer?

After you call add_neurons, do the resized layers show up in your model's state_dict, and do their weight tensors show up in model.parameters()?
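A quick way to check this, using a hypothetical minimal model (the names and sizes here are illustrative, not from the original post). Assigning a new module to an attribute of an nn.Module re-registers it automatically, so the grown layer should appear in both places:

```python
import torch

class MyNetwork(torch.nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.layer1 = torch.nn.Linear(input_dim, hidden_dim)
        self.layer2 = torch.nn.Linear(hidden_dim, output_dim)

model = MyNetwork(4, 8, 2)
# Replace the layers with wider ones, as add_neurons does internally.
# nn.Module.__setattr__ re-registers the new submodules automatically.
model.layer1 = torch.nn.Linear(4, 10)
model.layer2 = torch.nn.Linear(10, 2)

print(model.state_dict()["layer1.weight"].shape)   # torch.Size([10, 4])
print(sum(p.numel() for p in model.parameters()))  # 72
```

If the shapes and parameter count reflect the new sizes, registration is fine and the problem is elsewhere.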

It might be that you need to re-register the new parameters with the optimizer — the optimizer instance you created before calling add_neurons still holds references to the old, replaced tensors — but I'm not entirely sure.
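For what it's worth, here is a minimal sketch of that workaround: rebuild the optimizer from model.parameters() after growing the network, so it steps the current tensors rather than the orphaned ones. This is a simplified version of the model from the question (the flag argument is dropped for brevity, SGD is an assumption, and any accumulated optimizer state is discarded by the rebuild):

```python
import torch

class my_network(torch.nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.I, self.H, self.O = input_dim, hidden_dim, output_dim
        self.layer1 = torch.nn.Linear(input_dim, hidden_dim)
        self.layer2 = torch.nn.Linear(hidden_dim, output_dim)

    def add_neurons(self, n):
        # keep the trained slices, grow both layers by n hidden units
        w1, b1 = self.layer1.weight.data, self.layer1.bias.data
        w2, b2 = self.layer2.weight.data, self.layer2.bias.data
        self.layer1 = torch.nn.Linear(self.I, self.H + n)
        self.layer2 = torch.nn.Linear(self.H + n, self.O)
        self.layer1.weight.data[:-n, :] = w1
        self.layer1.bias.data[:-n] = b1
        self.layer2.weight.data[:, :-n] = w2
        self.layer2.bias.data = b2
        self.H += n

    def forward(self, x):
        return self.layer2(self.layer1(x))

model = my_network(4, 8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

model.add_neurons(2)
# The old optimizer still points at the replaced tensors, so its steps
# would not touch the new layers. Rebuild it over the current parameters:
opt = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(3, 4)).sum()
loss.backward()
before = model.layer1.weight.data.clone()
opt.step()
# model.layer1.weight now actually changes after the step
```

An alternative that preserves existing optimizer state would be opt.add_param_group({"params": ...}) for the new tensors, but rebuilding is the simplest fix.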