Concat weights from two pre-trained models

Hi.

I would like to concatenate high-level features from two of my own pre-trained U-Net models.

Let:

model1 = torch.load("UNetmodel1.pt")
model2 = torch.load("UNetmodel2.pt")

Model1 is pre-trained on a heart ultrasound dataset and model2 is pre-trained on an abdomen ultrasound dataset. I would like to extract high-level features from the two pre-trained U-Net segmentation networks and then classify those features with AlexNet or another state-of-the-art classification network. The output should have two classes: heart and abdomen.
How can I do this?

Based on your description of the use case, I don't think you actually want to concatenate the parameters of the models; you are looking for something like a model ensemble.
In the linked code snippet you could replace the two submodules with your pre-trained U-Nets and add AlexNet as the classifier.
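Since the linked snippet isn't shown in the thread, here is a minimal sketch of the ensemble idea: two pre-trained U-Nets act as feature extractors, their high-level feature maps are pooled and concatenated, and a classifier head predicts heart vs. abdomen. The class name MyEnsemble, the feat_dim value, and the assumption that each U-Net exposes an encoder attribute returning its bottleneck features are placeholders for your actual architecture, not the exact code from the linked example.

import torch
import torch.nn as nn

class MyEnsemble(nn.Module):
    def __init__(self, unet1, unet2, feat_dim=512, num_classes=2):
        super().__init__()
        self.unet1 = unet1  # pre-trained on heart ultrasound
        self.unet2 = unet2  # pre-trained on abdomen ultrasound
        self.pool = nn.AdaptiveAvgPool2d(1)  # collapse spatial dimensions
        # small classifier head; you could swap in AlexNet's classifier instead
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        # assumption: each U-Net has an encoder returning a high-level
        # feature map of shape [N, feat_dim, H, W]
        f1 = self.unet1.encoder(x)
        f2 = self.unet2.encoder(x)
        f1 = self.pool(f1).flatten(1)  # [N, feat_dim]
        f2 = self.pool(f2).flatten(1)  # [N, feat_dim]
        # concatenate the two feature vectors and classify
        return self.classifier(torch.cat([f1, f2], dim=1))

model1 = torch.load("UNetmodel1.pt")
model2 = torch.load("UNetmodel2.pt")
ensemble = MyEnsemble(model1, model2)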

@ptrblck Thank you very much. I got it working yesterday, based on the above example.
One small follow-up question: if I build a classification task on top of 2-3 pre-trained segmentation networks, should I retrain the whole model from scratch?

I think you should at least compare retraining the submodules and see if it improves your use case.
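As a rough sketch of the two setups you could compare (assuming the ensemble object from the snippet above): option A freezes the pre-trained U-Nets and trains only the classifier head, option B fine-tunes everything end to end, usually with a smaller learning rate for the pre-trained parts. The learning rates are only illustrative.

# Option A: freeze the pre-trained U-Nets, train only the classifier head
for p in ensemble.unet1.parameters():
    p.requires_grad = False
for p in ensemble.unet2.parameters():
    p.requires_grad = False
optimizer_a = torch.optim.Adam(ensemble.classifier.parameters(), lr=1e-3)

# Option B: fine-tune everything end to end, with a lower learning rate
# for the pre-trained submodules than for the new classifier head
optimizer_b = torch.optim.Adam([
    {"params": ensemble.unet1.parameters(), "lr": 1e-5},
    {"params": ensemble.unet2.parameters(), "lr": 1e-5},
    {"params": ensemble.classifier.parameters(), "lr": 1e-3},
])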