NCCL Error with NCCL WARN Cuda failure 'initialization error'

While running distributed training on Kubernetes with RDMA communication, I hit the following error:

NCCL WARN Cuda failure 'initialization error'

The full NCCL log from both worker pods is below, and a minimal sketch of the failing call path follows the log.

rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4275:4275 [0] NCCL INFO Bootstrap : Using eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4275:4275 [0] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-0: NCCL version 2.18.3+cuda12.2
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4282:4282 [7] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4282:4282 [7] NCCL INFO Bootstrap : Using eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4277:4277 [2] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4277:4277 [2] NCCL INFO Bootstrap : Using eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4278:4278 [3] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4278:4278 [3] NCCL INFO Bootstrap : Using eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4281:4281 [6] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4281:4281 [6] NCCL INFO Bootstrap : Using eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4280:4280 [5] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4280:4280 [5] NCCL INFO Bootstrap : Using eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4276:4276 [1] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4276:4276 [1] NCCL INFO Bootstrap : Using eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4279:4279 [4] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4279:4279 [4] NCCL INFO Bootstrap : Using eth0:10.224.2.49<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:2918 [4] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4282:5846 [7] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4282:5846 [7] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4278:5847 [3] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4278:5847 [3] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4275:5845 [0] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4275:5845 [0] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4282:5846 [7] NCCL INFO NET/IB : Using [0]mlx5_50:1/IB/SHARP [RO]; OOB eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4282:5846 [7] NCCL INFO Using network IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4278:5847 [3] NCCL INFO NET/IB : Using [0]mlx5_50:1/IB/SHARP [RO]; OOB eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4278:5847 [3] NCCL INFO Using network IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4275:5845 [0] NCCL INFO NET/IB : Using [0]mlx5_50:1/IB/SHARP [RO]; OOB eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4275:5845 [0] NCCL INFO Using network IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4277:5848 [2] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4277:5848 [2] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4277:5848 [2] NCCL INFO NET/IB : Using [0]mlx5_50:1/IB/SHARP [RO]; OOB eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4277:5848 [2] NCCL INFO Using network IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4281:5849 [6] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4281:5849 [6] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4281:5849 [6] NCCL INFO NET/IB : Using [0]mlx5_50:1/IB/SHARP [RO]; OOB eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4281:5849 [6] NCCL INFO Using network IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:2919 [5] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2914:2914 [0] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2917:2917 [3] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2916:2916 [2] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4280:5850 [5] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4280:5850 [5] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4280:5850 [5] NCCL INFO NET/IB : Using [0]mlx5_50:1/IB/SHARP [RO]; OOB eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4280:5850 [5] NCCL INFO Using network IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2915:2915 [1] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:2918 [4] misc/cudawrap.cc:33 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:2918 [4] NCCL INFO Bootstrap : Using eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:4483 [4] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:4483 [4] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:4483 [4] NCCL INFO NET/IB : Using [0]mlx5_54:1/IB/SHARP [RO]; OOB eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:4483 [4] NCCL INFO Using network IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4276:5851 [1] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4276:5851 [1] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4276:5851 [1] NCCL INFO NET/IB : Using [0]mlx5_50:1/IB/SHARP [RO]; OOB eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4276:5851 [1] NCCL INFO Using network IBext
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:4483 [4] init.cc:263 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2920:2920 [6] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:2919 [5] misc/cudawrap.cc:33 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2914:2914 [0] misc/cudawrap.cc:33 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:2919 [5] NCCL INFO Bootstrap : Using eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2914:2914 [0] NCCL INFO Bootstrap : Using eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2915:2915 [1] misc/cudawrap.cc:33 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2915:2915 [1] NCCL INFO Bootstrap : Using eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2917:2917 [3] misc/cudawrap.cc:33 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2916:2916 [2] misc/cudawrap.cc:33 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2917:2917 [3] NCCL INFO Bootstrap : Using eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2916:2916 [2] NCCL INFO Bootstrap : Using eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2914:4490 [0] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2914:4490 [0] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:4491 [5] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:4491 [5] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2914:4490 [0] NCCL INFO NET/IB : Using [0]mlx5_54:1/IB/SHARP [RO]; OOB eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2914:4490 [0] NCCL INFO Using network IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2917:4492 [3] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2917:4492 [3] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2916:4494 [2] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2916:4494 [2] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2915:4495 [1] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2915:4495 [1] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:4491 [5] NCCL INFO NET/IB : Using [0]mlx5_54:1/IB/SHARP [RO]; OOB eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:4491 [5] NCCL INFO Using network IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2917:4492 [3] NCCL INFO NET/IB : Using [0]mlx5_54:1/IB/SHARP [RO]; OOB eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2917:4492 [3] NCCL INFO Using network IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2916:4494 [2] NCCL INFO NET/IB : Using [0]mlx5_54:1/IB/SHARP [RO]; OOB eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2916:4494 [2] NCCL INFO Using network IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2915:4495 [1] NCCL INFO NET/IB : Using [0]mlx5_54:1/IB/SHARP [RO]; OOB eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2915:4495 [1] NCCL INFO Using network IBext
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2914:4490 [0] init.cc:263 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:4491 [5] init.cc:263 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2917:4492 [3] init.cc:263 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2916:4494 [2] init.cc:263 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4279:5852 [4] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4279:5852 [4] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4279:5852 [4] NCCL INFO NET/IB : Using [0]mlx5_50:1/IB/SHARP [RO]; OOB eth0:10.224.2.49<0>
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4279:5852 [4] NCCL INFO Using network IBext
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2915:4495 [1] init.cc:263 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2920:2920 [6] misc/cudawrap.cc:33 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2920:2920 [6] NCCL INFO Bootstrap : Using eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2920:4501 [6] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2920:4501 [6] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2920:4501 [6] NCCL INFO NET/IB : Using [0]mlx5_54:1/IB/SHARP [RO]; OOB eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2920:4501 [6] NCCL INFO Using network IBext
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2920:4501 [6] init.cc:263 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2921:2921 [7] NCCL INFO cudaDriverVersion 12020
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2921:2921 [7] misc/cudawrap.cc:33 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2921:2921 [7] NCCL INFO Bootstrap : Using eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2921:4504 [7] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2921:4504 [7] NCCL INFO P2P plugin IBext
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2921:4504 [7] NCCL INFO NET/IB : Using [0]mlx5_54:1/IB/SHARP [RO]; OOB eth0:10.224.3.43<0>
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2921:4504 [7] NCCL INFO Using network IBext
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2921:4504 [7] init.cc:263 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2921:4504 [7] NCCL INFO comm 0x564f059bcaa0 rank 15 nranks 16 cudaDev 7 nvmlDev 7 busId d6000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4281:5849 [6] NCCL INFO comm 0x557ee8b5bc50 rank 6 nranks 16 cudaDev 6 nvmlDev 6 busId d5000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4279:5852 [4] NCCL INFO comm 0x55e34df3d2d0 rank 4 nranks 16 cudaDev 4 nvmlDev 4 busId ce000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4280:5850 [5] NCCL INFO comm 0x55e738a5fcb0 rank 5 nranks 16 cudaDev 5 nvmlDev 5 busId d1000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4277:5848 [2] NCCL INFO comm 0x55f3e4a70240 rank 2 nranks 16 cudaDev 2 nvmlDev 2 busId 56000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4275:5845 [0] NCCL INFO comm 0x558aed296100 rank 0 nranks 16 cudaDev 0 nvmlDev 0 busId 4f000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4278:5847 [3] NCCL INFO comm 0x558c3a09ed10 rank 3 nranks 16 cudaDev 3 nvmlDev 3 busId 57000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4276:5851 [1] NCCL INFO comm 0x55dc183e8bd0 rank 1 nranks 16 cudaDev 1 nvmlDev 1 busId 52000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2920:4501 [6] NCCL INFO comm 0x55c1bb82e720 rank 14 nranks 16 cudaDev 6 nvmlDev 6 busId d5000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:4491 [5] NCCL INFO comm 0x55e3142238c0 rank 13 nranks 16 cudaDev 5 nvmlDev 5 busId d1000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:4483 [4] NCCL INFO comm 0x55784948e760 rank 12 nranks 16 cudaDev 4 nvmlDev 4 busId ce000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2917:4492 [3] NCCL INFO comm 0x5647fd465340 rank 11 nranks 16 cudaDev 3 nvmlDev 3 busId 57000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2916:4494 [2] NCCL INFO comm 0x55a961461df0 rank 10 nranks 16 cudaDev 2 nvmlDev 2 busId 56000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2915:4495 [1] NCCL INFO comm 0x55d772e9f530 rank 9 nranks 16 cudaDev 1 nvmlDev 1 busId 52000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2914:4490 [0] NCCL INFO comm 0x55dd4bc55110 rank 8 nranks 16 cudaDev 0 nvmlDev 0 busId 4f000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4282:5846 [7] NCCL INFO comm 0x562d7fe014c0 rank 7 nranks 16 cudaDev 7 nvmlDev 7 busId d6000 commId 0x42c314a7d8d38a3a - Init START
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4281:5849 [6] NCCL INFO Setting affinity for GPU 6 to ffff,fff00000,00ffffff,f0000000
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4281:5849 [6] NCCL INFO NVLS multicast support is not available on dev 6
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4277:5848 [2] NCCL INFO Setting affinity for GPU 2 to 0fffff,ff000000,0fffffff
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4277:5848 [2] NCCL INFO NVLS multicast support is not available on dev 2
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4276:5851 [1] NCCL INFO Setting affinity for GPU 1 to 0fffff,ff000000,0fffffff
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4276:5851 [1] NCCL INFO NVLS multicast support is not available on dev 1
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4282:5846 [7] NCCL INFO Setting affinity for GPU 7 to ffff,fff00000,00ffffff,f0000000
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4282:5846 [7] NCCL INFO NVLS multicast support is not available on dev 7
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4278:5847 [3] NCCL INFO Setting affinity for GPU 3 to 0fffff,ff000000,0fffffff
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4278:5847 [3] NCCL INFO NVLS multicast support is not available on dev 3
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4280:5850 [5] NCCL INFO Setting affinity for GPU 5 to ffff,fff00000,00ffffff,f0000000
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4280:5850 [5] NCCL INFO NVLS multicast support is not available on dev 5
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4279:5852 [4] NCCL INFO Setting affinity for GPU 4 to ffff,fff00000,00ffffff,f0000000
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4279:5852 [4] NCCL INFO NVLS multicast support is not available on dev 4
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4275:5845 [0] NCCL INFO Setting affinity for GPU 0 to 0fffff,ff000000,0fffffff
rdma-test-gpu-worker-0: rdma-test-gpu-worker-0:4275:5845 [0] NCCL INFO NVLS multicast support is not available on dev 0
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:4483 [4] NCCL INFO Setting affinity for GPU 4 to ffff,fff00000,00ffffff,f0000000
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:4483 [4] transport/nvls.cc:245 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:4483 [4] NCCL INFO init.cc:872 -> 1
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:4483 [4] NCCL INFO init.cc:1358 -> 1
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:4483 [4] NCCL INFO group.cc:65 -> 1 [Async thread]
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:2918 [4] NCCL INFO group.cc:406 -> 1
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2918:2918 [4] NCCL INFO group.cc:96 -> 1
rdma-test-gpu-worker-1: Traceback (most recent call last):
rdma-test-gpu-worker-1:   File "/mnt/nas/workspace/tuning/src/train_bash.py", line 14, in <module>
rdma-test-gpu-worker-1:     main()
rdma-test-gpu-worker-1:   File "/mnt/nas/workspace/tuning/src/train_bash.py", line 5, in main
rdma-test-gpu-worker-1:     run_exp()
rdma-test-gpu-worker-1:   File "/mnt/nas/workspace/tuning/src/llmtuner/tuner/tune.py", line 26, in run_exp
rdma-test-gpu-worker-1:     run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
rdma-test-gpu-worker-1:   File "/mnt/nas/workspace/tuning/src/llmtuner/tuner/sft/workflow.py", line 29, in run_sft
rdma-test-gpu-worker-1:     dataset = preprocess_dataset(dataset, tokenizer, data_args, training_args, stage="sft")
rdma-test-gpu-worker-1:   File "/mnt/nas/workspace/tuning/src/llmtuner/dsets/preprocess.py", line 158, in preprocess_dataset
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:4491 [5] NCCL INFO Setting affinity for GPU 5 to ffff,fff00000,00ffffff,f0000000
rdma-test-gpu-worker-1: 
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:4491 [5] transport/nvls.cc:245 NCCL WARN Cuda failure 'initialization error'
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:4491 [5] NCCL INFO init.cc:872 -> 1
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:4491 [5] NCCL INFO init.cc:1358 -> 1
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:4491 [5] NCCL INFO group.cc:65 -> 1 [Async thread]
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:2919 [5] NCCL INFO group.cc:406 -> 1
rdma-test-gpu-worker-1: rdma-test-gpu-worker-1:2919:2919 [5] NCCL INFO group.cc:96 -> 1
rdma-test-gpu-worker-1:     with training_args.main_process_first(desc="dataset map pre-processing"):
rdma-test-gpu-worker-1:   File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
rdma-test-gpu-worker-1:     return next(self.gen)
rdma-test-gpu-worker-1:   File "/usr/local/lib/python3.10/dist-packages/transformers/training_args.py", line 2040, in main_process_first
rdma-test-gpu-worker-1:     dist.barrier()
rdma-test-gpu-worker-1:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
rdma-test-gpu-worker-1:     return func(*args, **kwargs)
rdma-test-gpu-worker-1:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 3665, in barrier
rdma-test-gpu-worker-1:     work = default_pg.barrier(opts=opts)
rdma-test-gpu-worker-1: torch.distributed.DistBackendError: NCCL error in: /opt/pytorch/pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1197, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.18.3
rdma-test-gpu-worker-1: ncclUnhandledCudaError: Call to CUDA function failed.
rdma-test-gpu-worker-1: Last error:
rdma-test-gpu-worker-1: Cuda failure 'initialization error'
rdma-test-gpu-worker-1: Traceback (most recent call last):
rdma-test-gpu-worker-1:   File "/mnt/nas/workspace/tuning/src/train_bash.py", line 14, in <module>
rdma-test-gpu-worker-1:     main()
rdma-test-gpu-worker-1:   File "/mnt/nas/workspace/tuning/src/train_bash.py", line 5, in main
rdma-test-gpu-worker-1:     run_exp()
rdma-test-gpu-worker-1:   File "/mnt/nas/workspace/tuning/src/llmtuner/tuner/tune.py", line 26, in run_exp
rdma-test-gpu-worker-1:     run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
rdma-test-gpu-worker-1:   File "/mnt/nas/workspace/tuning/src/llmtuner/tuner/sft/workflow.py", line 29, in run_sft
rdma-test-gpu-worker-1:     dataset = preprocess_dataset(dataset, tokenizer, data_args, training_args, stage="sft")
rdma-test-gpu-worker-1:   File "/mnt/nas/workspace/tuning/src/llmtuner/dsets/preprocess.py", line 158, in preprocess_dataset
rdma-test-gpu-worker-1:     with training_args.main_process_first(desc="dataset map pre-processing"):
rdma-test-gpu-worker-1:   File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
rdma-test-gpu-worker-1:     return next(self.gen)
rdma-test-gpu-worker-1:   File "/usr/local/lib/python3.10/dist-packages/transformers/training_args.py", line 2040, in main_process_first
rdma-test-gpu-worker-1:     dist.barrier()
rdma-test-gpu-worker-1:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 47, in wrapper
rdma-test-gpu-worker-1:     return func(*args, **kwargs)
rdma-test-gpu-worker-1:   File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 3665, in barrier
rdma-test-gpu-worker-1:     work = default_pg.barrier(opts=opts)
rdma-test-gpu-worker-1: torch.distributed.DistBackendError: NCCL error in: /opt/pytorch/pytorch/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1197, unhandled cuda error (run with NCCL_DEBUG=INFO for details), NCCL version 2.18.3
rdma-test-gpu-worker-1: ncclUnhandledCudaError: Call to CUDA function failed.
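
For context, the traceback shows the error surfacing on worker-1 inside `dist.barrier()`, called from `transformers`' `main_process_first`. The sketch below is a minimal, hypothetical reproduction of that same call path with NCCL debug output enabled (NCCL only prints the INFO lines above when `NCCL_DEBUG=INFO` is set); the rendezvous endpoint and script name are placeholders, not taken from the original job:

```python
# Minimal sketch of the failing call path: init an NCCL process group, then barrier.
# Launched with one process per GPU on each pod, e.g. via torchrun:
#   torchrun --nnodes=2 --nproc_per_node=8 --rdzv_backend=c10d \
#            --rdzv_endpoint=<master-ip>:29500 repro_barrier.py
import os

import torch
import torch.distributed as dist

# Ask NCCL for verbose logs, like the output shown above.
os.environ.setdefault("NCCL_DEBUG", "INFO")

def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE in the environment.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Same backend and collective as in the traceback above.
    dist.init_process_group(backend="nccl")
    dist.barrier()  # the call that raised ncclUnhandledCudaError in the log

    print(f"rank {dist.get_rank()} / {dist.get_world_size()} passed the barrier")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```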

Environment info:

python -m torch.utils.collect_env

/usr/lib/python3.10/runpy.py:126: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
Collecting environment information...
PyTorch version: 2.1.0a0+29c30b1
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.1
Libc version: glibc-2.35

Python version: 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.242-1.el7.elrepo.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.128
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB

Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.4
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
Stepping: 6
Frequency boost: enabled
CPU max MHz: 2601.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.6 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 70 MiB (56 instances)
L3 cache: 84 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.1.0a0+29c30b1
[pip3] torch-tensorrt==2.0.0.dev0
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.16.0a0
[pip3] torchvision==0.16.0a0
[pip3] triton==2.1.0+440fd1b
[conda] Could not collect
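
Note that collect_env reports `Is CUDA available: True`, while the NCCL warnings from `misc/cudawrap.cc` on worker-1 indicate CUDA failing to initialize inside those training processes. As a quick sanity check (not part of the original report), one can confirm that a plain process in each worker pod can still initialize CUDA:

```python
# Hypothetical per-pod sanity check: verify the CUDA driver initializes
# and that all GPUs are visible inside this container.
import torch

print("torch:", torch.__version__)
print("cuda available:", torch.cuda.is_available())
print("device count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```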