Share dataloader across trainings

Is there a way to share the same dataloader across different trainings?
Let’s say I have 2 trainings of 2 different models that share the same data. Instead of having 2 processes with independent dataloaders, I would like to have a single dataloader and 2 processes that use it during training.
Is this possible?

I am sure it is possible if you can somehow synchronize both trainings, and then you can use DDP or DP, but I am not sure I understand your question completely. Are you training your two models in different processes? Or do you just want a way to use the same dataloader for both trainings?

I know how I can use the same dataloader for both trainings: I could just do the forward and backward of both models in the same loop. But I was trying to understand if there is another possibility, that is, training 2 models in different processes that share the same dataloader, so 2 independent trainings (in different processes) where the dataloader is somehow shared.
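For reference, this is a minimal sketch of the single-loop version I mean (the dataset, models, losses, and optimizers below are just placeholders):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# dummy data standing in for the real dataset
dataset = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model_a = torch.nn.Linear(10, 2)
model_b = torch.nn.Linear(10, 2)
opt_a = torch.optim.SGD(model_a.parameters(), lr=0.01)
opt_b = torch.optim.SGD(model_b.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

for x, y in loader:
    # forward/backward for model A
    opt_a.zero_grad()
    loss_a = criterion(model_a(x), y)
    loss_a.backward()
    opt_a.step()

    # forward/backward for model B on the same batch
    opt_b.zero_grad()
    loss_b = criterion(model_b(x), y)
    loss_b.backward()
    opt_b.step()
```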

Do you have an example of 2 independent trainings (in different processes) where the dataloader is somehow shared?

I have never tried what you are trying to do; maybe others can help you with how to do it.
I believe it is theoretically possible, but it would be way more complicated than just using DDP with normal training (and if you do want to do it, the way you initiate the new processes seems to be very important).
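One rough, completely untested idea would be a producer/consumer setup: a producer process owns the single DataLoader and pushes each batch into one queue per training process, e.g. with torch.multiprocessing. Everything below (the queue sizes, the dummy models, the sentinel handling, the names producer/consumer) is just an illustration, not something I have run:

```python
import torch
import torch.multiprocessing as mp
from torch.utils.data import DataLoader, TensorDataset

NUM_BATCHES = 100  # arbitrary limit for the sketch

def producer(queues):
    # the single dataloader lives only in this process
    dataset = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for i, batch in enumerate(loader):
        if i >= NUM_BATCHES:
            break
        for q in queues:      # every training process receives the same batch
            q.put(batch)
    for q in queues:
        q.put(None)           # sentinel: no more data

def consumer(q, lr):
    # each training process has its own model and optimizer
    model = torch.nn.Linear(10, 2)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    while True:
        batch = q.get()
        if batch is None:
            break
        x, y = batch
        opt.zero_grad()
        criterion(model(x), y).backward()
        opt.step()

if __name__ == "__main__":
    mp.set_start_method("spawn")   # how the processes are started matters here
    queues = [mp.Queue(maxsize=8) for _ in range(2)]
    procs = [mp.Process(target=producer, args=(queues,))]
    procs += [mp.Process(target=consumer, args=(q, lr))
              for q, lr in zip(queues, (0.01, 0.001))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```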
What I usually do, especially in GAN training, is define my dataloader and then use a training_step function which receives a batch at each step. Inside it I can have separate functions, say discriminator_step and generator_step, and in some cases even another module that needs its own separate backward. A minimal sketch of that pattern is below.
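The models, latent size, losses, and optimizers here are only placeholders to show the shape of the pattern:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

latent_dim = 16
generator = torch.nn.Linear(latent_dim, 10)
discriminator = torch.nn.Linear(10, 1)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = torch.nn.BCEWithLogitsLoss()

def discriminator_step(real):
    # separate backward for the discriminator
    opt_d.zero_grad()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
           bce(discriminator(fake), torch.zeros(real.size(0), 1))
    loss.backward()
    opt_d.step()

def generator_step(batch_size):
    # separate backward for the generator
    opt_g.zero_grad()
    fake = generator(torch.randn(batch_size, latent_dim))
    loss = bce(discriminator(fake), torch.ones(batch_size, 1))
    loss.backward()
    opt_g.step()

def training_step(batch):
    # receives one batch from the dataloader and runs both sub-steps
    real, _ = batch
    discriminator_step(real)
    generator_step(real.size(0))

# dummy loader just to exercise the loop
loader = DataLoader(TensorDataset(torch.randn(256, 10), torch.zeros(256)),
                    batch_size=32)
for batch in loader:
    training_step(batch)
```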