Concatenating DataLoaders

What is the best way to combine DataLoaders that have different batch sizes and different data shapes while still preserving the multi-process capabilities of the DataLoader?

dl1 = DataLoader(dataset1, batch_size=3)
dl2 = DataLoader(dataset2, batch_size=5)

where the data from dl1 has the shapes
data_a.shape = (3, 7), where 3 is the batch size from dl1
data_b.shape = (3, 11)

and the data from dl2 has the shape
data_c.shape = (5, 13), where 5 is the batch size from dl2

I would like the combined batches to be a simple joining of the per-loader batches:
data_a.shape = (3, 7), where 3 is the batch size from dl1
data_b.shape = (3, 11), where 3 is the batch size from dl1
data_c.shape = (5, 13), where 5 is the batch size from dl2

You should be able to iterate these DataLoaders together e.g. via zip:

import torch
from torch.utils.data import DataLoader, TensorDataset

datasetA = TensorDataset(torch.randn(12, 7), torch.randn(12, 11))
loaderA = DataLoader(datasetA, batch_size=3)

datasetB = TensorDataset(torch.randn(10, 13))
loaderB = DataLoader(datasetB, batch_size=5)

# zip stops once the shorter loader (here loaderB, with 2 batches) is exhausted
for (dataA, dataB), (dataC,) in zip(loaderA, loaderB):
    print(dataA.shape, dataB.shape, dataC.shape)
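
Note that zip only consumes the two iterators, so each DataLoader keeps its own multi-process loading: you can pass e.g. num_workers=2 to either DataLoader and the workers will be spawned per loader as usual.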

Also, itertools might provide you some methods, which e.g. could iterate the longest DataLoader by cycling the shorter one or by returning None instead (the standard zip would use the shortest loader).
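
Here is a minimal sketch of both approaches using itertools, reusing loaderA and loaderB from above (loaderA yields 4 batches, loaderB only 2):

from itertools import cycle, zip_longest

# Option 1: run until the longer loader is exhausted;
# the shorter one yields None once it runs out of batches
for batchA, batchB in zip_longest(loaderA, loaderB, fillvalue=None):
    if batchB is None:
        # loaderB is exhausted; decide how to handle the missing batch
        continue
    (dataA, dataB), (dataC,) = batchA, batchB
    print(dataA.shape, dataB.shape, dataC.shape)

# Option 2: restart the shorter loader so it matches the longer one.
# Caveat: cycle caches the batches from the first pass, so a shuffled
# loader will repeat the same batch order on later cycles.
for (dataA, dataB), (dataC,) in zip(loaderA, cycle(loaderB)):
    print(dataA.shape, dataB.shape, dataC.shape)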

I was making things too complicated in my mind for nothing, mostly because I thought DataLoader returned something more than an iterator.

Thank you