Pre-Trained Model Feature Fusion

Hi,
I have two different image datasets, but related to the same classes. I plan to use a pre-trained network such as AlexNet on each of them to extract features and then concatenate those features into a classifier (and optimize this classifier, not the whole models). I would appreciate it if you could help me within this context. Here is the pseudo code:

  1. Alexnet(Image I) --> Feature I
  2. Alexnet(Image II) --> Feature II
  3. Concatenate (Feature I, Feature II) --> Feature
  4. Feature --> Classifier
  5. Optimize Step 4

Thanks in advance,

This code snippet could be a good starter for your use case.
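Here is a minimal sketch of such an ensemble, assuming two AlexNet backbones whose classifier heads are replaced with nn.Identity (so each model returns 9216 features); nb_classes is a placeholder:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class MyEnsemble(nn.Module):
    def __init__(self, modelA, modelB, nb_classes=10):
        super(MyEnsemble, self).__init__()
        self.modelA = modelA
        self.modelB = modelB

        # Remove the classifier head so each model returns its features
        self.modelA.classifier = nn.Identity()
        self.modelB.classifier = nn.Identity()

        # New classifier on top of the concatenated features
        # (AlexNet yields 256 * 6 * 6 = 9216 features per model)
        self.classifier = nn.Linear(9216 + 9216, nb_classes)

    def forward(self, x):
        # Note: the same input is passed to both models here
        x1 = self.modelA(x)
        x1 = x1.view(x1.size(0), -1)
        x2 = self.modelB(x)
        x2 = x2.view(x2.size(0), -1)
        x = torch.cat((x1, x2), dim=1)
        x = self.classifier(F.relu(x))
        return x


modelA = models.alexnet(pretrained=True)
modelB = models.alexnet(pretrained=True)

# Freeze the pre-trained feature extractors; only the new classifier is trained
for param in modelA.parameters():
    param.requires_grad_(False)
for param in modelB.parameters():
    param.requires_grad_(False)

model = MyEnsemble(modelA, modelB)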
Let me know if that would work for you.

Many thanks for your prompt reply.
As I described, I need to pass one image to the first model and the second image to the other model (while your code passes the same image to both models). The main problems here are how the DataLoader can shuffle both datasets in the same order, and how to do the feature fusion.
Thanks in advance,

In that case you could pass both image tensors to your forward method as:

    def forward(self, x1, x2):
        x1 = self.modelA(x1)
        x1 = x1.view(x1.size(0), -1)  # flatten the features of the first model
        x2 = self.modelB(x2)
        x2 = x2.view(x2.size(0), -1)  # flatten the features of the second model
        x = torch.cat((x1, x2), dim=1)  # fuse along the feature dimension

        x = self.classifier(F.relu(x))
        return x

The output features will be concatenated in the torch.cat call.
Would that work for your “fusion” or are you thinking about another method?
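E.g. a training step would then take one batch from each dataset (a sketch; criterion, optimizer, and target are assumed to be defined as usual):

output = model(x1, x2)            # one batch of images from each dataset
loss = criterion(output, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()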

Could you explain your concerns about the DataLoader?
How are you defining both Datasets currently and how would you like to sample each image?

If you want to use the same index to get the samples from both Datasets, I think the easiest approach would be to wrap these Datasets in another one and just index them:

from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, ds1, ds2):
        self.ds1 = ds1
        self.ds2 = ds2

    def __getitem__(self, index):
        x1, y1 = self.ds1[index]
        x2, y2 = self.ds2[index]
        return x1, y1, x2, y2

    def __len__(self):
        return len(self.ds1)  # assume both datasets have the same length
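A usage sketch (the paths and transform are placeholders): since the DataLoader shuffles the indices of the wrapper dataset, both underlying datasets are sampled with the same shuffled index:

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
ds1 = datasets.ImageFolder('path/to/data1', transform=transform)
ds2 = datasets.ImageFolder('path/to/data2', transform=transform)

dataset = MyDataset(ds1, ds2)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for x1, y1, x2, y2 in loader:
    output = model(x1, x2)  # both batches were drawn with the same indices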

Could you please advise me?

self.features_i and self.features_ii will point to the same module and will thus reuse the parameters.
If you want to create two separate modules, you could use deepcopy as:

self.features_ii = deepcopy(model.features)
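A quick sketch to verify the difference, assuming an AlexNet:

from copy import deepcopy
import torchvision.models as models

model = models.alexnet(pretrained=True)
features_i = model.features             # same module, shared parameters
features_ii = deepcopy(model.features)  # independent copy with its own parameters

print(features_i[0].weight is model.features[0].weight)   # True
print(features_ii[0].weight is model.features[0].weight)  # False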

Could you post the code here, please?

The code looks alright from what I could see.
PS: it’s generally better to post code snippets directly by wrapping them in three backticks ```. :wink:


Hi,
I have three different image datasets (say X, Y, and Z), but related to the same classes. I plan to use pre-trained networks (ResNet50 and ResNet18) to extract features and then concatenate those features into a classifier. Here I want two classifiers: one predicting based on the 1st and 2nd datasets, the other based on the 1st and 3rd datasets. I have reused your ensemble code.
I would appreciate it if you could help me within this context. Please suggest any corrections. Thanks in advance,

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
from copy import copy, deepcopy


class MyEnsemble(nn.Module):
    def __init__(self, modelA, modelB, nb_classes=2):
        super(MyEnsemble, self).__init__()
        self.modelA = modelA
        self.modelB = modelB

        # Remove last linear layer
        self.modelA.fc = nn.Identity()
        self.modelB.fc = nn.Identity()

        # Create new classifier (2048 ResNet50 features + 512 ResNet18 features)
        self.classifier = nn.Linear(2048 + 512, nb_classes)
        
    def forward(self, x1, x2):
        x1 = self.modelA(x1)  
        x1 = x1.view(x1.size(0), -1)
        x2 = self.modelB(x2)
        x2 = x2.view(x2.size(0), -1)
        x = torch.cat((x1, x2), dim=1)
        
        x = self.classifier(F.relu(x))
        return x

class Conv(nn.Module):
    def __init__(self):
        super(Conv, self).__init__()
        # Train your separate models
        # ...
        # We use pretrained torchvision models here
        self.modelA = models.resnet50(pretrained=True)       #model to extract common feature
        self.modelB = models.resnet18(pretrained=True)
        self.modelA_1 = copy(self.modelA)     # shallow copy: shares parameters with modelA
        self.modelC = deepcopy(self.modelB)   # independent copy of the ResNet18

        # Freeze these models
        for param in self.modelA.parameters():
            param.requires_grad_(False)

        for param in self.modelB.parameters():
            param.requires_grad_(False)

        for param in self.modelC.parameters():
            param.requires_grad_(False)

        self.model1 = MyEnsemble(self.modelA, self.modelB)
        self.model2 = MyEnsemble(self.modelA_1, self.modelC)

    def forward(self, d1, d2, tgt):
        # Run both ensembles on their input pairs
        output1 = self.model1(d1, tgt)
        output2 = self.model2(d2, tgt)
        return output1, output2


x = torch.randn(1, 3, 224, 224)
y = torch.randn(1, 3, 224, 224)
z = torch.randn(1, 3, 224, 224)
model = Conv()
out1, out2 = model(x, y, z)
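Both outputs should then contain the logits for nb_classes=2 classes:

print(out1.shape)  # torch.Size([1, 2])
print(out2.shape)  # torch.Size([1, 2])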

Hey, how can I fuse two different CNN models in Google Colab using PyTorch? I would also like to get classification report results.
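The fusion itself works in Colab exactly as in the MyEnsemble examples above. For the classification report, here is a sketch using scikit-learn, assuming a trained fused model and a test_loader built with the MyDataset wrapper from earlier in this thread:

import torch
from sklearn.metrics import classification_report

model.eval()
all_preds, all_targets = [], []
with torch.no_grad():
    for x1, y1, x2, y2 in test_loader:
        output = model(x1, x2)
        all_preds.extend(output.argmax(dim=1).tolist())
        all_targets.extend(y1.tolist())  # assumes both datasets share the same labels

print(classification_report(all_targets, all_preds))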