RuntimeError: Address already in use


I am running distributed training on a computer with 8 GPUs.

I first ran this command:

CUDA_VISIBLE_DEVICES=6,7 MASTER_ADDR=localhost MASTER_PORT=47144 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2

I then ran this command:

CUDA_VISIBLE_DEVICES=4,5 MASTER_ADDR=localhost MASTER_PORT=47149 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2

However, I encountered the error below. What should I do to run two independent jobs on the same computer?

Traceback (most recent call last):
  File "/usr/lib/python3.8/", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/", line 173, in <module>
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/", line 169, in main
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/", line 621, in run
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/multiprocessing/errors/", line 348, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/", line 238, in launch_agent
    result =
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/", line 700, in run
    result = self._invoke_run(role)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/", line 822, in _invoke_run
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/", line 670, in _initialize_workers
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/", line 530, in _rendezvous
    store, group_rank, group_world_size = spec.rdzv_handler.next_rendezvous()
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/rendezvous/", line 55, in next_rendezvous
    self._store = TCPStore(
RuntimeError: Address already in use

Hi, on a single node you only need to use torch.distributed.launch once to launch all processes on the node. Re-launching it can run into connectivity issues, as a TCPStore has already been spawned on the host.

@rvarm1 ,
Thank you!
I would like to run two or more independent tasks on one computer.
Each command uses a different port and different GPUs.
Is there a way to do that?

@rvarm1 ,

I first tried the following two commands to start two tasks, each with two sub-processes, but I encountered the Address already in use issue.

CUDA_VISIBLE_DEVICES=1,3 WORLD_SIZE=2 MASTER_PORT=44144 python -m torch.distributed.launch --nproc_per_node=2

CUDA_VISIBLE_DEVICES=4,5 WORLD_SIZE=2 MASTER_PORT=44145 python -m torch.distributed.launch --nproc_per_node=2

I then used the following two commands, and both tasks (two sub-processes each) started successfully. Passing the port via the launcher's --master_port argument appears to be what matters here: the launcher picks its rendezvous port from that argument rather than from the MASTER_PORT environment variable, so without it both jobs fall back to the same default port and collide.

CUDA_VISIBLE_DEVICES=1,3 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 --master_port 47769

CUDA_VISIBLE_DEVICES=4,5 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 --master_port 47770

@Ardeal How did you find a free listening port to use?
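One common approach (a sketch, not something stated in this thread): ask the OS for an unused port by binding a socket to port 0, read the assigned number back, and pass it to the launcher via --master_port. The helper name `find_free_port` below is hypothetical.

```python
import socket

def find_free_port() -> int:
    # Binding to port 0 lets the OS pick an unused ephemeral port;
    # we read the chosen port back and the socket is closed on exit
    # from the with-block, freeing the port again.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("localhost", 0))
        return s.getsockname()[1]

if __name__ == "__main__":
    print(find_free_port())
```

You could then launch each job with something like `--master_port $(python find_free_port.py)`. Note there is a small race window: the port could in principle be taken by another process between the check and the launch.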