Transfer Learning between fully-connected neural networks of different architectures

Hi,

Say you have a fully connected neural network (FCNN) with 2 hidden layers of 512 neurons each. The input to this FCNN comes from a convolutional neural network (CNN) whose flattened output is a 1024-dimensional vector. That means the first layer of the FCNN has weights of shape (1024, 512) and biases of shape (512).

Now, I’d like to use the weights of this trained FCNN in another FCNN, which has the same hidden architecture (512, 512) but a higher-dimensional input (2048). That means there is a shape mismatch in the first layer: the pretrained weights are (1024, 512), while the new layer needs (2048, 512).

An intuitive way to ‘solve’ this mismatch is to tile the (1024, 512) matrix into a (2048, 512) one by simply mirroring (duplicating) the weights along the input dimension. How can someone go about doing this in PyTorch?
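For what it’s worth, here is a minimal sketch of the mirroring idea, assuming the layers are plain `nn.Linear` modules (the layer objects here are stand-ins for your actual model). Note that `nn.Linear` stores its weight as (out_features, in_features), so the pretrained weight is (512, 1024) and the duplication happens along `dim=1`:

```python
import torch
import torch.nn as nn

# Hypothetical pretrained first layer: 1024 -> 512.
# nn.Linear stores weight as (out_features, in_features), i.e. (512, 1024).
pretrained = nn.Linear(1024, 512)

# New first layer with the larger 2048-dim input: weight is (512, 2048).
new_layer = nn.Linear(2048, 512)

with torch.no_grad():
    # "Mirror" the pretrained weights along the input dimension by
    # concatenating two copies: (512, 1024) -> (512, 2048).
    new_layer.weight.copy_(
        torch.cat([pretrained.weight, pretrained.weight], dim=1)
    )
    # Biases have matching shape (512,) and can be copied directly.
    new_layer.bias.copy_(pretrained.bias)
```

One caveat: with two identical copies of the weights, each output is effectively doubled relative to the pretrained layer when the two input halves are similar, so you may want to scale the copied weights by 0.5.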

Thank you!

Neo