Hi,
I'm running distributed training on a machine with 8 GPUs.
I first run this command:
CUDA_VISIBLE_DEVICES=6,7 MASTER_ADDR=localhost MASTER_PORT=47144 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 example_top_api.py
Then I run a second command:
CUDA_VISIBLE_DEVICES=4,5 MASTER_ADDR=localhost MASTER_PORT=47149 WORLD_SIZE=2 python -m torch.distributed.launch --nproc_per_node=2 example_top_api.py
However, the second launch fails with the traceback below. What should I do to run two training jobs on the same machine?
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 173, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 169, in main
    run(args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 621, in run
    elastic_launch(
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 238, in launch_agent
    result = agent.run()
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 700, in run
    result = self._invoke_run(role)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 822, in _invoke_run
    self._initialize_workers(self._worker_group)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 670, in _initialize_workers
    self._rendezvous(worker_group)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/metrics/api.py", line 125, in wrapper
    result = f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/agent/server/api.py", line 530, in _rendezvous
    store, group_rank, group_world_size = spec.rdzv_handler.next_rendezvous()
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/rendezvous/static_tcp_rendezvous.py", line 55, in next_rendezvous
    self._store = TCPStore(
RuntimeError: Address already in use
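My current guess: torch.distributed.launch reads its rendezvous port from its own --master_port argument (which defaults to 29500), not from the MASTER_PORT environment variable, so both of my launches may be trying to bind the same default port for the TCPStore. The launcher also sets WORLD_SIZE, MASTER_ADDR, and MASTER_PORT for the worker processes itself, so my env vars would be redundant. Would passing a distinct --master_port to each launcher, roughly as sketched below (port numbers reused from my commands above), be the correct fix?

CUDA_VISIBLE_DEVICES=6,7 python -m torch.distributed.launch --nproc_per_node=2 --master_port=47144 example_top_api.py
CUDA_VISIBLE_DEVICES=4,5 python -m torch.distributed.launch --nproc_per_node=2 --master_port=47149 example_top_api.py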