My Python program crashes with PyTorch > 1.10.1

Hello, I’m working on a deep learning project with a ViT-like backbone. When I train the model under PyTorch > 1.10.1 (I’ve already tried 1.12.0, 1.12.1, and 1.13.0), the program crashes with the following error:

[E ProcessGroupNCCL.cpp:737] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800880 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:737] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800908 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:737] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800958 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:414] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
[E ProcessGroupNCCL.cpp:414] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
  what():  [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800880 milliseconds before timing out.
terminate called after throwing an instance of 'std::runtime_error'
  what():  [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800958 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:414] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
  what():  [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=63841, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1800908 milliseconds before timing out.
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 1000892 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 1 (pid: 1000893) of binary: /opt/conda/envs/lwh3/bin/python
Traceback (most recent call last):
  File "/opt/conda/envs/lwh3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/lwh3/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
    elastic_launch(
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/envs/lwh3/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
========================================================
lib/train/run_training.py FAILED
--------------------------------------------------------
Failures:
[1]:
  time      : 2022-11-12_10:44:17
  host      : ed85ab297bc3
  rank      : 2 (local_rank: 2)
  exitcode  : -6 (pid: 1000894)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 1000894
[2]:
  time      : 2022-11-12_10:44:17
  host      : ed85ab297bc3
  rank      : 3 (local_rank: 3)
  exitcode  : -6 (pid: 1000895)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 1000895
--------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2022-11-12_10:44:17
  host      : ed85ab297bc3
  rank      : 1 (local_rank: 1)
  exitcode  : -6 (pid: 1000893)
  error_file: <N/A>
  traceback : Signal 6 (SIGABRT) received by PID 1000893
========================================================

When I use PyTorch 1.10.1, everything runs fine. However, I need some functions that only exist in PyTorch >= 1.12.0. This problem really confuses me; could anyone help me with it?

Rerun your code with export NCCL_DEBUG=INFO set in the environment and check which errors or warnings NCCL reports before the crash.
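
As a minimal sketch, assuming your run_training.py calls init_process_group itself (the backend and timeout values below are assumptions; the log's Timeout(ms)=1800000 matches the default 30-minute watchdog timeout):

import datetime
import os

# NCCL reads NCCL_DEBUG when the communicator is created, so set it
# before init_process_group (exporting it in the shell before the
# launcher works just as well, since children inherit the environment).
os.environ["NCCL_DEBUG"] = "INFO"

import torch.distributed as dist

# Assumption: raising the watchdog timeout above the 30-minute default
# can help tell a genuine hang apart from a collective that is merely slow.
dist.init_process_group(
    backend="nccl",
    timeout=datetime.timedelta(hours=2),
)

With torch.distributed.launch (as in your traceback), the rank, world size, and master address are supplied via environment variables, so no extra arguments to init_process_group should be needed for this experiment.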