RuntimeError: Distributed package doesn't have NCCL built in

You might want to check this post.
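
If it helps, a quick way to confirm what a given build supports is to query the backend availability directly; on the Windows wheels this is expected to report gloo but not NCCL:

import torch.distributed as dist

# Windows builds of PyTorch typically ship without NCCL,
# so expect nccl=False here while gloo=True.
print("distributed available:", dist.is_available())
print("nccl available:", dist.is_nccl_available())
print("gloo available:", dist.is_gloo_available())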

Thanks… I think you said it should be OK if we are using a single GPU. In my case I am using a single GPU, so it should work.

Hi, I encountered the same issue with Windows not supporting NCCL. I only want to use a single GPU, but I don't know how to resolve it. Here is the relevant information. Can you provide me with a solution?

Collecting environment information...
PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.9.18 (main, Sep 11 2023, 14:09:26) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: True
CUDA runtime version: 12.3.103
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 551.76
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.3\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==1.13.1+cu117
[pip3] torchaudio==0.13.1+cu117
[pip3] torchvision==0.14.1+cu117
[conda] blas                      1.0                         mkl
[conda] cudatoolkit               11.8.0               hd77b12b_0
[conda] mkl                       2023.1.0         h6b88ed4_46358
[conda] mkl-service               2.4.0            py39h2bbff1b_1
[conda] mkl_fft                   1.3.8            py39h2bbff1b_0
[conda] mkl_random                1.2.4            py39h59b6b97_0
[conda] numpy                     1.26.4           py39h055cbcc_0
[conda] numpy-base                1.26.4           py39h65a83cf_0
[conda] pytorch-mutex             1.0                         cpu    pytorch
[conda] torch                     1.13.1+cu117             pypi_0    pypi
[conda] torchaudio                0.13.1+cu117             pypi_0    pypi
[conda] torchvision               0.14.1+cu117             pypi_0    pypi

import torch.cuda.nccl
torch.cuda.nccl.is_available(torch.randn(1).cuda())

D:\anaconda\envs\McQuic_1\lib\site-packages\torch\cuda\nccl.py:15: UserWarning: PyTorch is not compiled with NCCL support
  warnings.warn('PyTorch is not compiled with NCCL support')
False

I have the same issue. This is my output.

Collecting environment information...
PyTorch version: 2.2.2+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Microsoft Windows Server 2019 Standard
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.17763-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090

Nvidia driver version: 472.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture=9
CurrentClockSpeed=2101
DeviceID=CPU0
Family=179
L2CacheSize=16384
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2101
Name=Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz
ProcessorType=3
Revision=21767

Architecture=9
CurrentClockSpeed=2101
DeviceID=CPU1
Family=179
L2CacheSize=16384
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2101
Name=Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz
ProcessorType=3
Revision=21767

Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] numpy==1.26.3
[pip3] torch==2.2.2+cu118
[pip3] torchaudio==2.2.2+cu118
[pip3] torchvision==0.17.2+cu118
[conda] blas                      1.0                         mkl
[conda] mkl                       2023.1.0         h6b88ed4_46358
[conda] mkl-service               2.4.0           py310h2bbff1b_1
[conda] mkl_fft                   1.3.8           py310h2bbff1b_0
[conda] mkl_random                1.2.4           py310h59b6b97_0
[conda] numpy                     1.26.3          py310h055cbcc_0
[conda] numpy-base                1.26.3          py310h65a83cf_0
[conda] torch                     2.2.2+cu118              pypi_0    pypi
[conda] torchaudio                2.2.2+cu118              pypi_0    pypi
[conda] torchvision               0.17.2+cu118             pypi_0    pypi

As far as I can tell, this is a Windows issue, isn't it? The line:

torch.cuda.nccl.is_available(torch.randn(1).cuda())

also returns False.

I have the same issue.

python -m torch.utils.collect_env
Output: Collecting environment information...
PyTorch version: 1.8.0+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A

OS: Microsoft Windows Server 2022 Datacenter
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect

Python version: 3.9 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla T4
GPU 1: Tesla T4
GPU 2: Tesla T4
GPU 3: Tesla T4

Nvidia driver version: 551.78
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.23.0
[pip3] torch==1.8.0+cu111
[pip3] torchaudio==2.2.0.dev20240426+cu121
[pip3] torchmetrics==0.8.0
[pip3] torchvision==0.19.0.dev20240426+cu121
[conda] _anaconda_depends         2024.02             py311_mkl_1
[conda] blas                      1.0                         mkl
[conda] mkl                       2021.4.0                 pypi_0    pypi
[conda] mkl-service               2.4.0           py311h2bbff1b_1
[conda] mkl_fft                   1.3.8           py311h2bbff1b_0
[conda] mkl_random                1.2.4           py311h59b6b97_0
[conda] numpy                     1.26.4          py311hdab7c0b_0
[conda] numpy-base                1.26.4          py311hd01c5d8_0
[conda] numpydoc                  1.5.0           py311haa95532_0
[conda] pytorch                   2.2.2      py3.11_cuda11.8_cudnn8_0    pytorch
[conda] pytorch-cuda              11.8                 h24eeafa_5    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torch                     2.4.0.dev20240421+cu121  pypi_0    pypi
[conda] torchaudio                2.2.0.dev20240421+cu121  pypi_0    pypi
[conda] torchvision               0.17.2                   pypi_0    pypi

import torch.cuda.nccl
torch.cuda.nccl.is_available(torch.randn(1).cuda())
Output: False

You are running into the same issue as above: the Windows binaries of PyTorch are built without NCCL, so the nccl backend is not available. Use the gloo backend for distributed training on Windows, or skip torch.distributed entirely for single-GPU runs.
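
As a minimal sketch of the workaround (assuming single-node training launched with torchrun or another launcher that sets RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT), you can pick the backend at runtime and fall back to gloo when NCCL is missing:

import os

import torch
import torch.distributed as dist


def init_distributed() -> str:
    # Windows builds of PyTorch ship without NCCL, so fall back to gloo.
    backend = "nccl" if dist.is_nccl_available() else "gloo"

    # Uses the default env:// init; the launcher (e.g. torchrun) is assumed
    # to have set RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT.
    dist.init_process_group(backend=backend)

    # Bind this process to its local GPU (one GPU per process).
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    torch.cuda.set_device(local_rank)
    return backend


if __name__ == "__main__":
    print("initialized with backend:", init_distributed())
    dist.destroy_process_group()

For a single GPU it is often simpler to avoid torch.distributed altogether, if the training script has an option to disable DDP.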