DistributedDataParallel and multiple workers

Hi,
I have trouble using multiple workers with DistributedDataParallel.

  • If I set num_workers=0 + DDP everything works.
  • If I set num_workers > 0 without DDP everything works.
  • If I set num_workers > 0 with DDP I have the following error:
```
Traceback (most recent call last):
  File "train_new.py", line 170, in <module>
    trainer.train()
  File "/home/matte/PhD/LV-LAB/ELVIS/elvis/trainers/distributed.py", line 38, in train
    mp.spawn(self._distributed_training,
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 230, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 188, in start_processes
    while not context.join():
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 150, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException: 

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
    fn(i, *args)
  File "/home/matte/PhD/LV-LAB/ELVIS/elvis/trainers/distributed.py", line 71, in _distributed_training
    self.train_loop()
  File "/home/matte/PhD/LV-LAB/ELVIS/elvis/trainers/base.py", line 144, in train_loop
    for batch in self._trloader:
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 355, in __iter__
    return self._get_iterator()
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 301, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 914, in __init__
    w.start()
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "/home/matte/anaconda3/envs/lvlab/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 240, in reduce_tensor
    event_sync_required) = storage._share_cuda_()
RuntimeError: Attempted to send CUDA tensor received from another process; this is not currently supported. Consider cloning before sending.
```

I tried to debug it without success. The only thing I know is that the error occurs on the first iteration over the dataloader; the code crashes before even entering mydataset.__getitem__(). Does anyone have an idea of what is going on?

Hi,

Could you give an example that reproduces this issue?

Seeing
```
RuntimeError: Attempted to send CUDA tensor received from another process; this is not currently supported. Consider cloning before sending.
```
makes me think that you are mixing multiprocessing and DDP somehow?

maybe also see:
https://pytorch.org/docs/stable/notes/cuda.html#cuda-nn-ddp-instead

After several hours of debugging I have found the likely problem. In my setup I initialized my model and moved it to the GPU inside the master process, and then re-used it in all the processes composing the DDP run. Moving the creation of the model inside each individual process (instead of the master one) solved the problem. I think the main issue was moving the model to the GPU before spawning the processes: my guess is that PyTorch cannot send parameters from one process to another once they are CUDA tensors.
So, for all future readers: it is better to create your model after the mp.spawn call :slight_smile:
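
To make that concrete, here is a minimal sketch of the pattern that worked (not my actual trainer code; the `worker` function, the toy `Linear` model, the random dataset, and the TCP address are made up for illustration, and it assumes one GPU per rank with the NCCL backend). The key point is that the model is created and moved to the GPU *inside* the function passed to mp.spawn, so no CUDA tensors ever need to be pickled across processes, and the DataLoader can then use num_workers > 0:

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def worker(rank, world_size):
    # One process per GPU; initialize the process group first.
    dist.init_process_group(
        "nccl", init_method="tcp://127.0.0.1:23456",
        rank=rank, world_size=world_size,
    )

    # Create the model inside the spawned process and only then move it
    # to this rank's device, so no CUDA tensors cross process boundaries.
    model = torch.nn.Linear(10, 2).to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    # Toy dataset stands in for the real one; tensors stay on the CPU,
    # so DataLoader workers (num_workers > 0) can pickle them safely.
    dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 2))
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(dataset, batch_size=8, num_workers=2, sampler=sampler)

    for x, y in loader:
        loss = torch.nn.functional.mse_loss(
            ddp_model(x.to(rank)), y.to(rank)
        )
        loss.backward()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```

The failing version did the equivalent of `model = MyModel().cuda()` before mp.spawn and passed the model through the spawn args, which is what triggers the "Attempted to send CUDA tensor received from another process" error once the DataLoader starts its worker processes.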
