PyTorch shared memory behavior in multiprocessing

Hi, I am using the multiprocessing framework to speed up CPU-side processing of data into tensors, following the multiprocessing best-practices guide.

Suppose my dataset is the list [1, 2, ..., 32]. I have 2 outer processes, each running a DataLoader with 4 workers, so:

  • process 1 will load samples 1–16
  • process 2 will load samples 17–32

Since each process has a 4-worker DataLoader, a single worker in process 1 will only load 4 samples (e.g. 1–4).

All these processes and multi-worker DataLoaders exist to speed up data loading and preprocessing. However, I only have one GPU and one model, so in this case, should I use share_memory()? That way each process would process only part of the dataset, but their tensors would be shared?
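To make the question concrete, here is a minimal sketch of the setup I have in mind, using the standard-library `multiprocessing.Array` as a stand-in for a shared tensor (in PyTorch, `tensor.share_memory_()` / `torch.multiprocessing` would play the same role). The split points and the fake "preprocessing" are hypothetical, just to illustrate two processes filling disjoint halves of one shared buffer:

```python
# Sketch (hypothetical names/splits): two processes each preprocess their
# half of a 32-sample dataset into one buffer that lives in shared memory,
# analogous to PyTorch's tensor.share_memory_().
import multiprocessing as mp

def worker(shared, start, end):
    # Each process "cooks" only its own slice of the dataset.
    for i in range(start, end):
        shared[i] = (i + 1) * 10  # stand-in for real preprocessing

def main():
    # lock=False: no synchronization needed, the slices are disjoint.
    shared = mp.Array('i', 32, lock=False)
    p1 = mp.Process(target=worker, args=(shared, 0, 16))   # process 1: samples 1-16
    p2 = mp.Process(target=worker, args=(shared, 16, 32))  # process 2: samples 17-32
    p1.start(); p2.start()
    p1.join(); p2.join()
    # Both halves are visible here because the buffer is shared, not copied.
    return list(shared)

if __name__ == "__main__":
    print(main())
```

My question is essentially whether this pattern (one shared buffer, partial work per process) is the right use of `share_memory()` when there is only a single GPU and a single model consuming the results.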