How to ensemble two models in PyTorch?

I want to ensemble MyModelA and MyModelB, but there is a runtime error:

Expected 4-dimensional input for 4-dimensional weight 8 3, but got 2-dimensional input of size [1, 25088] instead

Please help me.

import torch
import torch.nn as nn

class MyModelA(nn.Module):
    def __init__(self):
        super(MyModelA, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(8),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(in_channels=8, out_channels=16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2)) 
        self.layer3 = nn.Sequential(
            nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2))     
        self.fc = nn.Linear(25088, 2)
        
    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        #out = self.layer4(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out
class MyModelB(nn.Module):
..
class MyEnsemble(nn.Module):
    def __init__(self, modelA, modelB):
        super(MyEnsemble, self).__init__()
        self.modelA = modelA
        self.modelB = modelB
        #self.classifier = nn.Linear(4, 2)
        
    def forward(self, x1, x2):
        head1a, head1b = self.modelA(x1)
        head2 = self.modelB(head1a)
        x = torch.cat((head1b, head2), dim=1)
        return x

# Create models and load state_dicts    
modelA = MyModelA()
modelB = MyModelB()
# Load state dicts
modelA.load_state_dict(torch.load('checkpoint1.pt'))
modelB.load_state_dict(torch.load('checkpoint2.pt'))

model = MyEnsemble(modelA, modelB)
x1, x2 = torch.randn(1,25088), torch.randn(1, 25088)
output = model(x1, x2)

MyModelA uses an nn.Conv2d layer as its first layer, which expects a 4-dimensional input of shape [batch_size, channels, height, width].
In your code snippet you define x1 as a 2-dimensional tensor, which raises this error.
Based on the shape, it seems as if you would only want to use self.fc on this input?
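To illustrate the shape requirement, here is a small sketch. The 224×224 input size is an assumption on my side (not stated in your post), chosen because three MaxPool2d(2) layers reduce 224 → 112 → 56 → 28, so the flattened size 32 · 28 · 28 = 25088 matches your self.fc:

```python
import torch

# MyModelA's conv stack expects a 4D image tensor [batch, channels, height, width].
# Assuming 224x224 RGB inputs: three MaxPool2d(2) layers halve 224 -> 112 -> 56 -> 28,
# so the flattened feature size is 32 * 28 * 28 = 25088, matching self.fc.
x1 = torch.randn(1, 3, 224, 224)   # 4D input: valid for nn.Conv2d
# x1 = torch.randn(1, 25088)       # 2D input: raises the reported error
assert 32 * 28 * 28 == 25088
```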

@ptrblck I saw your ensemble code and tried to ensemble my models with it. I want to combine the classifiers only. Can I use this for a ResNet model? If yes, how?

Would you like to combine the penultimate activations and train a new classifier on top, or did I misunderstand your use case?

I have 2 similar datasets, A and B. I want to train the model on A and on B and save the models as checkpoint1 and checkpoint2. Now I want to ensemble these two models. My question is: how do I combine them?

Yes, you are right. I want to combine the penultimate activations and train a new classifier.

My example should do exactly this. Would that work for you?
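For reference, a minimal sketch of that pattern (names like EnsembleClassifier and feat_dim are illustrative, not from your code). It assumes each base model returns its flattened penultimate features, e.g. after replacing its .fc with nn.Identity(), and trains only a new linear head on the concatenated features:

```python
import torch
import torch.nn as nn

class EnsembleClassifier(nn.Module):
    """Sketch: combine penultimate features of two trained models,
    then learn a fresh classifier on top (feature extractors frozen)."""
    def __init__(self, modelA, modelB, feat_dim=25088, num_classes=2):
        super().__init__()
        self.modelA = modelA
        self.modelB = modelB
        # Freeze the pretrained feature extractors; only the new head trains.
        for p in self.modelA.parameters():
            p.requires_grad = False
        for p in self.modelB.parameters():
            p.requires_grad = False
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x1, x2):
        featA = self.modelA(x1)                   # penultimate features from model A
        featB = self.modelB(x2)                   # penultimate features from model B
        combined = torch.cat((featA, featB), dim=1)
        return self.classifier(combined)
```

Usage sketch: `modelA.fc = nn.Identity()` (and likewise for modelB) so each model outputs its 25088-dim features, then train only `ensemble.classifier` on your data.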


Thank you… I will try this and let you know.