Single encoder, multiple decoders

Hi,

I want to integrate multiple pre-trained models (model1 and model2). Both have an encoder-decoder architecture, and I want to keep the encoder of model1 common to both models.

Given the large codebases and pre-trained weights at my disposal, what is the best way to extract the decoder layers from model2 and attach them to the encoder of model1, so that all three can co-exist?

I’m not sure if I understand the use case correctly, but if you want to use specific submodules from the pretrained models, you could use:

import torch

data = torch.randn(...)          # input shape depends on the encoder
out = model1.encoder(data)       # encode with model1's encoder
out = model2.decoder(out)        # decode with model2's decoder

assuming that model1 and model2 are written in a way that allows easy access to the encoder and decoder submodules.
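If all three should co-exist in a single module, you could also wrap the shared encoder and both decoders in a small custom module. A rough sketch (assuming the encoder/decoder attributes exist and the encoder output is compatible with both decoders, which is usually not the case out of the box for two unrelated models):

import torch.nn as nn

class SharedEncoderModel(nn.Module):
    # hypothetical wrapper: model1's encoder feeds both decoders
    def __init__(self, model1, model2):
        super().__init__()
        self.encoder = model1.encoder
        self.decoder1 = model1.decoder
        self.decoder2 = model2.decoder

    def forward(self, x):
        features = self.encoder(x)
        out1 = self.decoder1(features)
        out2 = self.decoder2(features)
        return out1, out2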

Hi Peter,

Thanks for your response.

The two models are https://github.com/intel-isl/MiDaS and https://github.com/NVlabs/planercnn. Calling model.encoder does not work for either of them, so I guess the encoder and decoder are not exposed as separate submodules.

I am trying to extract the layers using model.children() and slice the resulting list to separate the encoder and decoder parts. I am then planning to load the pretrained weights into these layers via the state dict. Is this the right way to extract the layers and apply transfer learning, or do I have to dig deep into the underlying code and rebuild the layers myself?
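Roughly, this is what I had in mind (just a sketch; the split index is a placeholder, since I haven't located the real encoder/decoder boundary yet):

import torch.nn as nn

# model2 is already loaded with its pretrained weights at this point
children = list(model2.children())
split_idx = 10  # placeholder -- depends on where the decoder actually starts
decoder2 = nn.Sequential(*children[split_idx:])

# later, stack it behind model1's encoder
# out = decoder2(model1.encoder(data))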

If you are planning to extract the encoder/decoder layers via model.children() and use them in e.g. separate nn.Sequential containers, note that you would have to make sure these submodules can be executed purely sequentially, i.e. that the original forward does not apply any functional calls between them.

I would generally recommend looking into the model definition and making sure the forward is still executed as expected.
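For example, if the original forward applies functional calls such as activations or skip connections, chaining the children in an nn.Sequential would silently drop that logic. A contrived sketch (not taken from either repo) to illustrate the issue:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Original(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 16, 3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        # functional activation and skip connection -- not captured by model.children()
        return F.relu(self.conv2(out) + out)

model = Original()
# this only chains the registered submodules (conv1, conv2) and loses the relu/skip logic
naive = nn.Sequential(*model.children())

x = torch.randn(1, 3, 8, 8)
print(torch.allclose(model(x), naive(x)))  # False -- the outputs differ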