Two different datasets with different sizes

Suppose I have one dataset with 40000 samples and another with 10000 samples.
I want 80 and 20 samples per mini-batch from them respectively, so that in one epoch all samples are iterated over.
What is the best way to implement this?
I understand how to use my own data, so I tried using two different data loaders. But I'm not sure how to iterate over two different data loaders at once.
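One common approach (an assumption on my part, not necessarily what the linked answer does) is to give each dataset its own DataLoader with the desired batch size and `zip` them: since 40000/80 = 10000/20 = 500, both loaders yield the same number of batches, so one pass covers every sample. A minimal sketch with made-up tensor data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# stand-ins for the two real datasets (shapes are arbitrary)
ds1 = TensorDataset(torch.randn(40000, 8))
ds2 = TensorDataset(torch.randn(10000, 8))

loader1 = DataLoader(ds1, batch_size=80, shuffle=True)
loader2 = DataLoader(ds2, batch_size=20, shuffle=True)

# both loaders have 500 batches, so zip exhausts both in one epoch
for step, ((x1,), (x2,)) in enumerate(zip(loader1, loader2)):
    batch = torch.cat([x1, x2], dim=0)  # 80 + 20 = 100 samples per step
```

Note that `zip` silently truncates to the shorter loader, so this only covers a full epoch when the batch sizes divide the dataset sizes in the same ratio.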

This should help: my old answer

Would that work for you??

Thanks for the reply. I don't fully understand it yet, but is it possible with your solution to restrict the number of samples drawn from each dataset in a mini-batch? For example, 80 from dataset 1 and 20 from dataset 2.

Elaborating: create two torch.utils.data.Dataset classes for the two different datasets you have. Then create a third, fused dataset class whose elements are instances of those two classes. The __getitem__ method of this fused class would draw from the two datasets with probability 4:1 (i.e., 80% from dataset 1, 20% from dataset 2).
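The probabilistic fusion described above could be sketched as follows (the class name and attributes are my assumptions; a 4:1 ratio only holds in expectation, so per-batch counts will fluctuate):

```python
import random
from torch.utils.data import Dataset

class FusionDataset(Dataset):
    """Sketch: draw from data1 with probability 0.8, else from data2."""
    def __init__(self, data1, data2):
        self.data1 = data1
        self.data2 = data2

    def __len__(self):
        return len(self.data1) + len(self.data2)

    def __getitem__(self, index):
        # the incoming index is ignored; a source and position are
        # sampled at random, matching the 4:1 ratio in expectation
        if random.random() < 0.8:
            return self.data1[random.randrange(len(self.data1))]
        return self.data2[random.randrange(len(self.data2))]
```

Because sampling is random, this does not guarantee that every sample is seen exactly once per epoch; the deterministic index-splitting approach below does.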

Or you could do something like:

def __init__(self):
    self.data1 = #call first instance
    self.data2 = #call second instance
    self.size1 = 80
    self.size2 = 20

def __getitem__(self, index):
    if index < self.size1:
        return self.data1[index]
    else:
        return self.data2[index - self.size1]

That’s just a rough outline; you can add more elements (e.g., a __len__ method) to the class.
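Fleshing out that outline, a deterministic variant (my own sketch, with assumed names and sizes) maps every block of 100 consecutive indices to 80 samples from the first dataset followed by 20 from the second. With batch_size=100 and shuffle=False, each mini-batch then contains exactly 80 and 20 samples, and one epoch of 500 batches visits all 50000 samples once:

```python
import torch
from torch.utils.data import Dataset, DataLoader, TensorDataset

class BlockFusionDataset(Dataset):
    """Sketch: index block k covers data1[k*size1 : (k+1)*size1]
    followed by data2[k*size2 : (k+1)*size2]."""
    def __init__(self, data1, data2, size1=80, size2=20):
        # both datasets must split into the same number of blocks
        assert len(data1) // size1 == len(data2) // size2
        self.data1, self.data2 = data1, data2
        self.size1, self.size2 = size1, size2

    def __len__(self):
        return len(self.data1) + len(self.data2)

    def __getitem__(self, index):
        block, offset = divmod(index, self.size1 + self.size2)
        if offset < self.size1:
            return self.data1[block * self.size1 + offset]
        return self.data2[block * self.size2 + (offset - self.size1)]

ds1 = TensorDataset(torch.zeros(40000, 4))  # stand-in for dataset 1
ds2 = TensorDataset(torch.ones(10000, 4))   # stand-in for dataset 2
fused = BlockFusionDataset(ds1, ds2)
loader = DataLoader(fused, batch_size=100, shuffle=False)
```

Shuffling within each dataset (rather than over the fused indices) can be added with per-block permutations if needed; shuffle=True on the fused loader would break the 80/20 split.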