Ensemble of models with several dataloaders

I have searched for ensemble methods in PyTorch, but failed to find what I want.

I have several models that are already trained, but they use different transforms. I want to combine these models with some ensemble method. One option is to save each model's results to a file and then load those files for training, but this is not very elegant.

What I'd like is to get the outputs of each model directly in my code (without pre-saving), but I can only get the outputs of one batch per model, since each model has its own dataloader (because of the different transforms). I don't quite know how to combine the outputs of these models.

The code looks something like this:

def evaluate(model, device, dataloader):
    ...
    for inputs, labels in dataloader:
        ...
        outputs = model(inputs)
    # How to combine outputs from different models?

for mdl_name in mdl_list:
    mdl = ...
    tf = mdl.tf
    dataset = ImageLoader(img_path, tf)
    dataloader = DataLoader(dataset, batch_size, shuffle=True)
    mdl.load_state_dict(...)
    # How to combine outputs from different models?

Thanks in advance.

You could add the different transformations to your Dataset and return the N transformed samples, one per model.
With this approach you would only have to iterate over the dataset once.

Thanks @ptrblck. So do you mean I cannot use batch training, and must use the whole dataset as one batch? I don't think this is what I want, and it would cause out-of-memory problems.

Sorry, I don't think I quite get your point.

Sorry for not explaining it clearly.
The use case would be to return N transformed samples instead of one:

def __getitem__(self, index):
    x = self.data[index]
    x1 = self.transform1(x)
    x2 = self.transform2(x)
    ...
    return x1, x2, ...

Now each batch from your DataLoader will contain the samples in all transformations. You could also return a dict of transformed samples, if that makes it easier to pick the right one for each model.
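To make the idea concrete, here is a minimal runnable sketch of such a Dataset with two transforms (the class name, the dummy data, and the lambda "transforms" are just placeholders for illustration):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class MultiTransformDataset(Dataset):
    """Returns each sample under two different transforms."""
    def __init__(self, data, transform1, transform2):
        self.data = data
        self.transform1 = transform1
        self.transform2 = transform2

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        x = self.data[index]
        # Same underlying sample, transformed once per model
        return self.transform1(x), self.transform2(x)

# Dummy data and trivial stand-in transforms
data = torch.randn(8, 3, 4, 4)
dataset = MultiTransformDataset(data, lambda x: x * 2, lambda x: x + 1)
loader = DataLoader(dataset, batch_size=4)

for x1, x2 in loader:
    # x1 would be fed to model 1, x2 to model 2 -- one pass over the data
    print(x1.shape, x2.shape)  # torch.Size([4, 3, 4, 4]) for both
```

Because both views come out of the same `__getitem__` call, the batches stay aligned sample-by-sample even with `shuffle=True`, which is exactly what separate dataloaders cannot guarantee.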

Thanks @ptrblck. To be honest, this method feels a little cumbersome, since you need to carefully match each model to the correct portion of the input. What I'd like is to concatenate the output tensors from the different models and use the resulting big tensor as input to a downstream ensemble classifier (for training). Anyway, thank you a lot for your kind help.
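For what it's worth, the concatenation step itself is just `torch.cat` over the per-model outputs. A minimal sketch (the models, dimensions, and the `ensemble_head` classifier are all made-up stand-ins, assuming the per-model inputs come from aligned batches):

```python
import torch
import torch.nn as nn

# Stand-ins for two pre-trained models with different output sizes
model_a = nn.Linear(10, 5)
model_b = nn.Linear(10, 3)
# Ensemble classifier trained on the concatenated outputs
ensemble_head = nn.Linear(5 + 3, 2)

x_a = torch.randn(4, 10)  # batch transformed for model A
x_b = torch.randn(4, 10)  # the same samples, transformed for model B

with torch.no_grad():  # base models are frozen / already trained
    out_a = model_a(x_a)
    out_b = model_b(x_b)

features = torch.cat([out_a, out_b], dim=1)  # shape [4, 8]
logits = ensemble_head(features)
print(logits.shape)  # torch.Size([4, 2])
```

The crucial assumption is that row i of `x_a` and row i of `x_b` are the same underlying sample; returning all transformed views from one Dataset (as suggested above) is one way to guarantee that.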