Merging the pretrained weights of two neural networks

I am not sure how to make this happen.

For example, I have 2 neural networks, and the first 3 layers of each have identical architectures (the pretrained weights themselves differ).

If I want to merge the pretrained weights/biases of these two sets of 3 layers, how can I do that?

I am thinking of the following logic:

  • (layer 1 of 1st NN + layer 1 of 2nd NN) / 2
  • (layer 2 of 1st NN + layer 2 of 2nd NN) / 2
  • (layer 3 of 1st NN + layer 3 of 2nd NN) / 2

Does it make any sense?

In my opinion, if the models were trained independently, it is not meaningful to combine them this way, because there is no correlation or common reference point between them. That is, even if the 1st layer of the 1st NN and the 1st layer of the 2nd NN learn similar features, there is no guarantee they learn them in the same order, so the average would mix up unrelated weights.

That said, it is technically possible to do, and the averaged weights may still give you a better starting point than random initialization.
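
If you do want to try it, here is a minimal sketch in PyTorch of the averaging you describe. It assumes both models expose the shared layers under the same parameter names in their `state_dict`; the module names `fc1`/`fc2`/`fc3` and the layer sizes below are placeholders for whatever your real models use.

```python
import torch.nn as nn

# Two example networks whose first 3 layers share the same architecture.
# forward() is omitted for brevity; only the parameters matter here.
class NetA(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 32)
        self.fc3 = nn.Linear(32, 16)
        self.head = nn.Linear(16, 2)   # layers after the shared part may differ

class NetB(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 32)
        self.fc3 = nn.Linear(32, 16)
        self.head = nn.Linear(16, 5)

def average_shared_layers(model_a, model_b, shared_prefixes=("fc1", "fc2", "fc3")):
    """Return a state dict for model_a where the parameters of the shared
    layers are replaced by the element-wise mean of the two models."""
    sd_a = model_a.state_dict()
    sd_b = model_b.state_dict()
    merged = {}
    for name, tensor_a in sd_a.items():
        if any(name.startswith(prefix) for prefix in shared_prefixes):
            merged[name] = (tensor_a + sd_b[name]) / 2.0
        else:
            merged[name] = tensor_a  # keep model_a's weights for the rest
    return merged

net_a, net_b = NetA(), NetB()
net_a.load_state_dict(average_shared_layers(net_a, net_b))
```

Whether the merged first 3 layers are actually useful is exactly the question raised above: without some alignment between the two networks, the averaged weights may not preserve what either model learned.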