Torchrun seems to launch too many ranks, causing an error

Hi, what could cause torchrun to “miscalculate” the number of ranks? I have two Slurm clusters, but only on one of them does torchrun launch too many ranks, causing the error.
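
For what it's worth, the rank count torchrun ends up with is driven by --nnodes, --nproc_per_node, and the number of agents that join the rendezvous (normally one per srun task). A quick way to compare what each cluster feeds into that calculation is to dump the relevant Slurm variables from every task; this is just a generic check, not output from my jobs:

```bash
# Dump, from every Slurm task, the variables that influence how many torchrun
# agents and workers get launched (generic check, not output from my jobs).
srun --label bash -c 'hostname; env | grep -E "^SLURM_(PROCID|NTASKS|JOB_NUM_NODES|NTASKS_PER_NODE|GPUS_ON_NODE)="'
```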

NCCL seems to detect this and prints the warning bootstrap.cc:130 NCCL WARN Bootstrap Root : mismatch in rank count from procs 8 : 16 before crashing.

Reproducible by running /usr/bin/hostname through torchrun across two nodes (a sketch of such a launch and the resulting stack trace are below).
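
A minimal version of the reproducer would look something like the following; this is a reconstruction rather than the exact command, so the port and rendezvous settings are placeholders:

```bash
# Minimal reproducer sketch (reconstruction, not the exact command; the port
# and rendezvous values are placeholders).
# --no_python lets torchrun launch a plain executable instead of a Python script.
head_node=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

srun --label torchrun \
    --nnodes="$SLURM_JOB_NUM_NODES" \
    --nproc_per_node=8 \
    --rdzv_backend=c10d \
    --rdzv_id="$SLURM_JOB_ID" \
    --rdzv_endpoint="${head_node}:29500" \
    --no_python /usr/bin/hostname
```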

0: Traceback (most recent call last):
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/bin/torchrun", line 8, in <module>
0:     sys.exit(main())
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
0:     return f(*args, **kwargs)
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/run.py", line 812, in main
0:     run(args)
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/run.py", line 803, in run
0:     elastic_launch(
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
0:     return launch_agent(self._config, self._entrypoint, list(args))
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
0:     result = agent.run()
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
0:     result = f(*args, **kwargs)
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 727, in run
0:     result = self._invoke_run(role)
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 862, in _invoke_run
0:     self._initialize_workers(self._worker_group)
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
0:     result = f(*args, **kwargs)
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 699, in _initialize_workers
0:     self._rendezvous(worker_group)
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
0:     result = f(*args, **kwargs)
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 545, in _rendezvous
0:     workers = self._assign_worker_ranks(store, group_rank, group_world_size, spec)
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/metrics/api.py", line 123, in wrapper
0:     result = f(*args, **kwargs)
0:   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 633, in _assign_worker_ranks
0:     my_role_info = role_infos[group_rank]
0: IndexError: list index out of range
srun: error: ip-10-1-71-217: task 0: Exited with exit code 1
1: ip-10-1-113-136
1: ip-10-1-113-136
1: ip-10-1-113-136
1: ip-10-1-113-136
1: ip-10-1-113-136
1: ip-10-1-113-136
1: ip-10-1-113-136
1: ip-10-1-113-136
1: [2024-02-14 14:34:09,431] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-10-1-113-136.us-west-2.compute.internal_901704_0' has failed to send a keep-alive heartbeat to the rendezvous '382' due to an error of type RendezvousConnectionError.
1: [2024-02-14 14:34:10,883] torch.distributed.elastic.agent.server.api: [ERROR] Error waiting on exit barrier. Elapsed: 0.002970457077026367 seconds
1: [2024-02-14 14:34:10,883] torch.distributed.elastic.agent.server.api: [ERROR] Traceback (most recent call last):
1: [2024-02-14 14:34:10,883] torch.distributed.elastic.agent.server.api: [ERROR]   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/agent/server/api.py", line 929, in _exit_barrier
1: [2024-02-14 14:34:10,883] torch.distributed.elastic.agent.server.api: [ERROR]     store_util.barrier(
1: [2024-02-14 14:34:10,883] torch.distributed.elastic.agent.server.api: [ERROR]   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py", line 78, in barrier
1: [2024-02-14 14:34:10,883] torch.distributed.elastic.agent.server.api: [ERROR]     synchronize(store, data, rank, world_size, key_prefix, barrier_timeout)
1: [2024-02-14 14:34:10,883] torch.distributed.elastic.agent.server.api: [ERROR]   File "/fsx/ubuntu/awsome-distributed-training/3.test_cases/1.megatron-lm/.venv/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py", line 63, in synchronize
1: [2024-02-14 14:34:10,883] torch.distributed.elastic.agent.server.api: [ERROR]     store.set(f"{key_prefix}{rank}", data)
1: [2024-02-14 14:34:10,883] torch.distributed.elastic.agent.server.api: [ERROR] torch.distributed.DistNetworkError: Broken pipe
1: [2024-02-14 14:34:10,896] torch.distributed.elastic.rendezvous.dynamic_rendezvous: [WARNING] The node 'ip-10-1-113-136.us-west-2.compute.internal_901704_0' has failed to shutdown the rendezvous '382' due to an error of type RendezvousConnectionError.

@ptrblck Do you have any insight into what could be causing this, or have you seen this issue before?