How can I make two dataloaders of the same dataset use the same shuffle order?

Clearly, we need to manipulate the random seeding to achieve this. I have tried the function below, but I am still getting different orderings from the two dataloaders of the same dataset.

import random
import numpy as np
import torch

def random_seeding(seed_value):
    # Seed every RNG that could influence shuffling and augmentation
    if seed_value > 0:
        np.random.seed(seed_value)
        torch.manual_seed(seed_value)
        random.seed(seed_value)
        if torch.cuda.is_available(): torch.cuda.manual_seed_all(seed_value)

Oh, and I am using next(iter(...)) to get the batches from the two dataloaders, as follows:

img_batch1 = next(iter(dataloader_1))
img_batch2 = next(iter(dataloader_2))

img_batch1 is different from img_batch2!

NB. Each dataloader is using num_workers=1
NB2. Using state = torch.get_rng_state() before the first loader, and then torch.set_rng_state(state) before the second loader, did not help either (roughly reconstructed below).
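
For clarity, the RNG-state attempt mentioned in NB2 looked roughly like this (a reconstruction for illustration, not the original code):

state = torch.get_rng_state()           # snapshot the global CPU RNG
img_batch1 = next(iter(dataloader_1))
torch.set_rng_state(state)              # restore it before building the second iterator
img_batch2 = next(iter(dataloader_2))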

I think you need to set the seed in the worker_init_fn as described in the docs:

By default, each worker will have its PyTorch seed set to base_seed + worker_id , where base_seed is a long generated by main process using its RNG. However, seeds for other libraries may be duplicated upon initializing workers (e.g., NumPy), causing each worker to return identical random numbers. (See My data loader workers return identical random numbers section in FAQ.) You may use torch.initial_seed() to access the PyTorch seed for each worker in worker_init_fn , and use it to set other seeds before data loading.
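
For reference, a minimal worker_init_fn along those lines could look like the sketch below; the name seed_worker is just illustrative, and dataset stands for whatever Dataset object you are loading. Note that this only stops the workers of one loader from sharing NumPy/random state; as the rest of the thread discusses, it does not by itself synchronize the shuffle order of two separate loaders.

import random
import numpy as np
import torch
from torch.utils.data import DataLoader

def seed_worker(worker_id):
    # torch.initial_seed() inside a worker already equals base_seed + worker_id;
    # reuse it so NumPy and random do not return identical numbers in every worker.
    worker_seed = torch.initial_seed() % 2**32  # NumPy expects a 32-bit seed
    np.random.seed(worker_seed)
    random.seed(worker_seed)

loader = DataLoader(dataset, batch_size=5, shuffle=True,
                    num_workers=1, worker_init_fn=seed_worker)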

I've tested both dataloaders and they are still not synchronized; again, I use
next(iter(...)), as in

img_batch1 = next(iter(dataloader_1))  
img_batch2 = next(iter(dataloader_2))

I am still getting different images in img_batch1 compared with img_batch2.

Note. I had to set the number of workers to zero to get it working.

random_seed()  # re-seed all RNGs (see random_seeding above)
dataloader_1 = DataLoader(test_set1,
                          batch_size=5,
                          shuffle=True,
                          num_workers=0,
                          worker_init_fn=torch.initial_seed())  # ignored when num_workers=0
random_seed()
dataloader_2 = DataLoader(test_set2,
                          batch_size=5,
                          shuffle=True,
                          num_workers=0,
                          worker_init_fn=torch.initial_seed())

Hello @Deeply,

I think torch.manual_seed() can guarantee the same shuffle sequence within each data_loader separately, but it cannot keep two data_loaders in sync with each other. Also, worker_init_fn operates per worker within a single dataloader: it can make the random operations (like transformations) consistent across that dataloader's worker processes, so setting the same worker_init_fn on both loaders is not a proper solution here.

If I am wrong, please correct me.

According to the source, when you set shuffle=True the DataLoader creates a RandomSampler. The RandomSampler draws its random index sequence from torch's global RNG (via calls like torch.randint), and that is where the problem lies: torch.manual_seed can guarantee the same sequence within each data_loader separately, but not across all of the data_loaders, because each sampler draws from the RNG independently.

Another option: you could pass the same RandomSampler to your dataloaders so that they get the same random sequence, I believe. A sketch of that idea follows below.
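
Here is a minimal sketch of that idea, assuming test_set1 and test_set2 have the same length; instead of sharing one sampler object, each sampler is given its own torch.Generator seeded with the same value (RandomSampler accepts a generator argument in recent PyTorch releases). The names SEED and make_synced_loader are illustrative:

import torch
from torch.utils.data import DataLoader, RandomSampler

SEED = 0  # any fixed value

def make_synced_loader(dataset):
    # Each sampler draws its permutation from a generator seeded identically,
    # so both loaders visit indices in the same order on every epoch.
    g = torch.Generator()
    g.manual_seed(SEED)
    sampler = RandomSampler(dataset, generator=g)
    # shuffle must be left off when an explicit sampler is supplied
    return DataLoader(dataset, batch_size=5, sampler=sampler, num_workers=0)

dataloader_1 = make_synced_loader(test_set1)
dataloader_2 = make_synced_loader(test_set2)

img_batch1 = next(iter(dataloader_1))
img_batch2 = next(iter(dataloader_2))  # same index order as img_batch1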


Thanks for sharing your thoughts on this issue. Allow me to say that torch.manual_seed did not help at all, and neither did worker_init_fn; I am not sure if I am doing something wrong. You are correct that passing a RandomSampler would solve the issue. As a simpler solution to my problem, however, I changed the dataset class to return two instances of the same image instead of one, each transformed differently.
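
For anyone taking the same route, a minimal sketch of such a dataset wrapper might look like this (the class name and transform arguments are illustrative, not the actual code from this project, and the base dataset is assumed to yield (img, label) pairs):

from torch.utils.data import Dataset

class TwoViewDataset(Dataset):
    """Returns two differently transformed views of the same image,
    so one DataLoader (and one shuffle) yields aligned pairs."""

    def __init__(self, base_dataset, transform_a, transform_b):
        self.base = base_dataset          # any dataset yielding (img, label)
        self.transform_a = transform_a
        self.transform_b = transform_b

    def __len__(self):
        return len(self.base)

    def __getitem__(self, index):
        img, label = self.base[index]
        return self.transform_a(img), self.transform_b(img), label

A single DataLoader over this wrapper then replaces the two separate loaders, so the question of synchronizing their shuffles disappears.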
