Reuse weights from another neural network

Hi all,

I just built and trained a neural network. The network takes an input image, which is linearly mapped to a hidden layer with N hidden units; the output of the hidden layer is then linearly mapped again to the output layer. In both the hidden and output layers I'm using the ReLU activation function. Now I want to create a second network with a similar structure, but this time the hidden layer needs to consist of N+1 hidden units. Furthermore, I would like to reuse the weights found for the previous network and initialize the extra weights using a normal distribution. The net I have so far looks like this. Note, the idea is to extend this to a larger network; for the first initialization I want to use Xavier initialization. I hope someone can give some general advice.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Neural_Network(nn.Module):
    def __init__(self, inpsize, outsize, hidsize, N, n, Weights1, Weights2):
        super(Neural_Network, self).__init__()
        self.inputSize = inpsize
        self.outputSize = outsize
        self.hiddenSize = hidsize

        if N == 0:
            # First network: initialize both layers with Xavier numbers.
            self.W1 = nn.Linear(self.inputSize, self.hiddenSize, bias=False)
            self.W2 = nn.Linear(self.hiddenSize, self.outputSize, bias=False)

            torch.nn.init.xavier_uniform_(self.W1.weight, gain=1.0)
            torch.nn.init.xavier_uniform_(self.W2.weight, gain=1.0)
        else:
            # Later networks: here I want to reuse Weights1/Weights2 from
            # the previous network, but I don't know how to do that yet.
            self.W1 = nn.Linear(self.inputSize, self.hiddenSize, bias=False)
            self.W2 = nn.Linear(self.hiddenSize, self.outputSize, bias=False)

    def forward(self, X):
        x = F.relu(self.W1(X))
        x = F.relu(self.W2(x))
        return x, self.W1.weight, self.W2.weight
      

This thread may be what you're looking for. It shows how you can copy trained weights from one model to another.
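
For the N → N+1 case you describe, one possible approach is to copy the old weight matrices into slices of the new, larger layers and give only the new row/column a normal init. Here's a minimal sketch (the helper name `grow_hidden_layer` is mine, not from any library, and it assumes bias-free layers like in your code):

```python
import torch
import torch.nn as nn

def grow_hidden_layer(old_W1, old_W2, inpsize, outsize, new_hidsize):
    """Build larger Linear layers that reuse old weights.

    old_W1: tensor of shape (old_hidsize, inpsize)
    old_W2: tensor of shape (outsize, old_hidsize)
    """
    old_hidsize = old_W1.shape[0]

    W1 = nn.Linear(inpsize, new_hidsize, bias=False)
    W2 = nn.Linear(new_hidsize, outsize, bias=False)

    with torch.no_grad():
        # Normal init for the whole layer first...
        nn.init.normal_(W1.weight)
        nn.init.normal_(W2.weight)
        # ...then overwrite the part covered by the old weights.
        # The extra row of W1 / extra column of W2 (the new hidden
        # unit) keeps the normal init.
        W1.weight[:old_hidsize, :] = old_W1
        W2.weight[:, :old_hidsize] = old_W2

    return W1, W2
```

In your `__init__`, the `else` branch could then call this with `Weights1`/`Weights2` (e.g. the weight tensors your `forward` already returns, detached from the old model) and assign the result to `self.W1`/`self.W2`.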