Reproducibility and Number of DataLoader Workers

For reproducibility in the experiments I'm running, I'm seeding at the beginning of my code and calling torch.use_deterministic_algorithms. However, if I use a different number of DataLoader workers on two different runs, will I get repeatable results for the same seed, or do I also need to keep the number of workers consistent between runs?

I'm using PyTorch Lightning, so I'm calling its pl.seed_everything function for seeding, if that makes a difference.
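For context on why I suspect the worker count matters: PyTorch documents that each DataLoader worker is seeded with base_seed + worker_id, so each worker draws from its own RNG stream. Here is a minimal stdlib-only sketch (no torch, and using a simplified round-robin sample assignment, which is an assumption, not PyTorch's exact dispatch logic) of how changing the worker count can reshuffle which RNG stream touches which sample:

```python
import random

def simulate_epoch(base_seed, num_workers, num_samples):
    # Each simulated worker gets its own RNG, seeded as base_seed + worker_id,
    # mirroring PyTorch's documented per-worker seeding scheme.
    n = max(num_workers, 1)
    rngs = [random.Random(base_seed + w) for w in range(n)]
    # Hypothetical round-robin assignment of samples to workers: which RNG
    # "augments" a given sample depends on the worker count.
    return [rngs[i % n].random() for i in range(num_samples)]

# Same seed and same worker count -> identical random draws per sample.
print(simulate_epoch(42, 2, 6) == simulate_epoch(42, 2, 6))  # True
# Same seed but different worker count -> different draws per sample.
print(simulate_epoch(42, 2, 6) == simulate_epoch(42, 4, 6))  # False
```

So my worry is that even with identical seeds, any randomness executed inside the workers (e.g. augmentations) could differ across runs with different num_workers.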