RuntimeError: Address already in use

Hi,

I am running distributed training on a machine with 8 GPUs.

I first ran this command:

CUDA_VISIBLE_DEVICES=6,7 MASTER_ADDR=localhost MASTER_PORT=47144 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 example_top_api.py

I then ran this command:

CUDA_VISIBLE_DEVICES=4,5 MASTER_ADDR=localhost MASTER_PORT=47149 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 example_top_api.py

However, the second launch failed with the error below. What should I do to run two jobs on the same machine?

Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 173, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 169, in main
    run(args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 621, in run
    elastic_launch(
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 238, in launch_agent
    result = agent.run()
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 700, in run
    result = self._invoke_run(role)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 822, in _invoke_run
    self._initialize_workers(self._worker_group)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 670, in _initialize_workers
    self._rendezvous(worker_group)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 530, in _rendezvous
    store, group_rank, group_world_size = spec.rdzv_handler.next_rendezvous()
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/rendezvous/static_tcp_rendezvous.py", line 55, in next_rendezvous
    self._store = TCPStore(
RuntimeError: Address already in use



Hi, on a single node you only need to use torch.distributed.launch once to launch all processes on the node. Re-launching can run into connectivity issues as a TCPStore is already spawned on the host.
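To make the failure concrete, here is a minimal sketch (my own illustration, not taken from the launcher source) of the conflict: the static rendezvous starts a TCPStore TCP server on the master port, and a second server cannot bind to a port that is already taken. The port number and world_size below are just placeholders.

from datetime import timedelta
from torch.distributed import TCPStore

PORT = 47144  # placeholder: the port the first launcher's rendezvous bound

# First launch: the rendezvous starts a TCPStore server on localhost:PORT.
store_a = TCPStore("localhost", PORT, world_size=1, is_master=True,
                   timeout=timedelta(seconds=30))

# Second launch trying to serve on the same port: the bind fails and raises
# "RuntimeError: Address already in use".
store_b = TCPStore("localhost", PORT, world_size=1, is_master=True,
                   timeout=timedelta(seconds=30))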

@rvarm1 ,
Thank you!
I would like to run two or more independent tasks on one computer.
Each command uses a different port and different GPUs.
Is there a way to do that?

@rvarm1 ,

I first tried the following 2 commands to start 2 tasks, each with 2 sub-processes, but I hit the Address already in use error:

CUDA_VISIBLE_DEVICES=1,3 WORLD_SIZE=2 MASTER_PORT=44144 python -m torch.distributed.launch --nproc_per_node=2 train.py

CUDA_VISIBLE_DEVICES=4,5 WORLD_SIZE=2 MASTER_PORT=44145 python -m torch.distributed.launch --nproc_per_node=2 train.py

I then used the following 2 commands, and both tasks started successfully, each with 2 sub-processes:

CUDA_VISIBLE_DEVICES=1,3 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 --master_port 47769 train.py

CUDA_VISIBLE_DEVICES=4,5 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 --master_port 47770 train.py
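The difference between the two attempts seems to be that the launcher builds its rendezvous endpoint from the --master_port argument, while setting MASTER_PORT only as an environment variable apparently is not enough, so the first two launches likely both fell back to the launcher's default port and collided. If you want to avoid picking port numbers by hand, here is a minimal sketch (my own, standard library only, not from this thread) that asks the OS for a free port to pass to --master_port; note there is a small race window between printing the port and the launcher binding it.

import socket

def find_free_port() -> int:
    # Bind to port 0 so the OS assigns an unused TCP port, then report it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("localhost", 0))
        return s.getsockname()[1]

if __name__ == "__main__":
    # Example: pass the printed value to --master_port in the launch command.
    print(find_free_port())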

@Ardeal How did you find a free listening port?