torch.cuda.is_available() returns False even though CUDA is installed

Hello everyone! I'm running into a problem where PyTorch can't see CUDA. Can anyone suggest how to make it work properly? I'm quite new to PyTorch.

OS: Windows 10


import torch
print(torch.backends.cudnn.enabled)
 >> True

print(torch.cuda.is_available())
 >> False


!python -m torch.utils.collect_env



Collecting environment information...
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Pro
GCC version: (x86_64-posix-seh-rev1, Built by MinGW-W64 project) 7.2.0

Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19043-SP0
Is CUDA available: False
CUDA runtime version: 11.7.99

GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080
Nvidia driver version: 516.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.3
[pip3] numpydoc==1.1.0
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] blas                      1.0                         mkl  
[conda] cudatoolkit               11.7.0              ha6f8bbd_10    conda-forge
[conda] mkl                       2021.4.0           haa95532_640  
[conda] mkl-service               2.4.0            py39h2bbff1b_0  
[conda] mkl_fft                   1.3.1            py39h277e83a_0  
[conda] mkl_random                1.2.2            py39hf11a4ad_0  
[conda] numpy                     1.20.3           py39ha4e8547_0  
[conda] numpy-base                1.20.3           py39hc2deb75_0  
[conda] numpydoc                  1.1.0              pyhd3eb1b0_1  
[conda] pytorch                   1.12.1              py3.9_cpu_0    pytorch
[conda] pytorch-mutex             1.0                         cpu    pytorch
[conda] torch                     1.12.1                   pypi_0    pypi
[conda] torchaudio                0.12.1                 py39_cpu    pytorch
[conda] torchvision               0.13.1                   pypi_0    pypi

nvidia-smi
Fri Aug 12 12:08:31 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 516.94       Driver Version: 516.94       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ... WDDM  | 00000000:01:00.0  On |                  N/A |
| N/A   56C    P8    12W /  N/A |    463MiB /  8192MiB |      2%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      9436    C+G   C:\Windows\explorer.exe         N/A      |
|    0   N/A  N/A      9960    C+G   ...icrosoft VS Code\Code.exe    N/A      |
|    0   N/A  N/A     10480    C+G   ...artMenuExperienceHost.exe    N/A      |
|    0   N/A  N/A     11668    C+G   ...5n1h2txyewy\SearchApp.exe    N/A      |
|    0   N/A  N/A     13736    C+G   ...e\PhoneExperienceHost.exe    N/A      |
|    0   N/A  N/A     14156    C+G   ...2txyewy\TextInputHost.exe    N/A      |
|    0   N/A  N/A     14476    C+G   ...me\Application\chrome.exe    N/A      |
|    0   N/A  N/A     14560    C+G   ...lPanel\SystemSettings.exe    N/A      |
+-----------------------------------------------------------------------------+

nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:59:34_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0

Thank you!


Hi! I'm not an expert, but PyTorch needs specific versions of CUDA and cuDNN.
My setup is Python 3.10.4, PyTorch 1.12.0, CUDA 11.6, and cuDNN 8.0.

When you list your installed packages, you should see the Python, CUDA, and cuDNN versions in the build string, like this:

pytorch 1.12.0 py3.10_cuda11.6_cudnn8_0 pytorch

Yours shows a CPU-only build:

[conda] pytorch 1.12.1 py3.9_cpu_0 pytorch

It would be better to check that you have installed the proper versions of Python, CUDA, and cuDNN. :slightly_smiling_face:
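
A quick way to see at runtime which build you actually ended up with (just a sketch, not something from this thread) is to ask torch itself:

import torch

print(torch.__version__)          # e.g. "1.12.1" for a conda build or "1.12.1+cpu" for a CPU pip wheel
print(torch.version.cuda)         # None for a CPU-only build, "11.7" for a CUDA build
print(torch.cuda.is_available())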

conda list returns these related libs:


cuda                      11.7.1                        0    nvidia
cuda-cccl                 11.7.91                       0    nvidia
cuda-command-line-tools   11.7.1                        0    nvidia
cuda-compiler             11.7.1                        0    nvidia
cuda-cudart               11.7.99                       0    nvidia
cuda-cudart-dev           11.7.99                       0    nvidia
cuda-cuobjdump            11.7.91                       0    nvidia
pytorch                   1.12.1              py3.9_cpu_0    pytorch
pytorch-mutex             1.0                         cpu    pytorch
cuda-cupti                11.7.101                      0    nvidia
cuda-cuxxfilt             11.7.91                       0    nvidia
cuda-demo-suite           11.7.91                       0    nvidia
cuda-documentation        11.7.91                       0    nvidia
cuda-libraries            11.7.1                        0    nvidia
cuda-libraries-dev        11.7.1                        0    nvidia
cuda-memcheck             11.7.91                       0    nvidia
cuda-nsight-compute       11.7.1                        0    nvidia
cuda-nvcc                 11.7.99                       0    nvidia
cuda-nvdisasm             11.7.91                       0    nvidia
cuda-nvml-dev             11.7.91                       0    nvidia
cuda-nvprof               11.7.101                      0    nvidia
cuda-nvprune              11.7.91                       0    nvidia
cuda-nvrtc                11.7.99                       0    nvidia
cuda-nvrtc-dev            11.7.99                       0    nvidia
cuda-nvtx                 11.7.91                       0    nvidia
cuda-nvvp                 11.7.101                      0    nvidia
cuda-python               11.7.1                   pypi_0    pypi
cuda-runtime              11.7.1                        0    nvidia
cuda-sanitizer-api        11.7.91                       0    nvidia
cuda-toolkit              11.7.1                        0    nvidia
cuda-tools                11.7.1                        0    nvidia
cuda-visual-tools         11.7.1                        0    nvidia
cudatoolkit               11.7.0              ha6f8bbd_10    conda-forge

Are you pointing to the right CUDA install? On Linux, issues can arise from environment variables not pointing to the correct CUDA installation; there might be equivalent behavior on Windows.
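
For instance (an illustrative Python sketch of what this reply suggests checking; the variable names are the conventional ones, not taken from this thread):

import os

# CUDA_PATH / CUDA_HOME usually point at the toolkit; PATH (and LD_LIBRARY_PATH on Linux)
# decide which CUDA libraries are found first.
for var in ("CUDA_PATH", "CUDA_HOME", "PATH", "LD_LIBRARY_PATH"):
    print(var, "=", os.environ.get(var, "<not set>"))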

What GPU do you have? Can you check your current CUDA driver to make sure it supports running CUDA 11.7? (You can do this via the nvidia-smi command.)
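
If you want to script that check instead of reading the full table (a sketch, assuming nvidia-smi is on your PATH):

import subprocess

# Ask nvidia-smi for just the driver version and GPU name; the driver must support
# the CUDA version your PyTorch binaries were built for.
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version,name", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())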

I am using a GTX 1080.


What CUDA version and driver version does nvidia-smi show? There’s an example of it here.

nvidia-smi
Fri Aug 12 12:08:31 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 516.94       Driver Version: 516.94       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ... WDDM  | 00000000:01:00.0  On |                  N/A |
| N/A   56C    P8    12W /  N/A |    463MiB /  8192MiB |      2%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+


@Rhinestone Was this ever resolved?
I have this exact same issue even though all the dependencies are met. I installed PyTorch using the following command (which I got from the PyTorch installation website here):

conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

Here are the debug logs:

>> python -c 'import torch; print(torch.backends.cudnn.enabled)'
True

>>  python -c 'import torch; print(torch.cuda.is_available())'
False

>> python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35

Python version: 3.10.6 (main, Oct 24 2022, 16:07:47) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.13.0
[pip3] torch-hd==3.4.0
[pip3] torchaudio==0.13.0
[pip3] torchmetrics==0.10.1
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.14.0
[conda] blas                      1.0                         mkl
[conda] cudatoolkit               11.3.1               h2bc3f7f_2
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] mkl                       2021.4.0           h06a4308_640
[conda] mkl-service               2.4.0           py310h7f8727e_0
[conda] mkl_fft                   1.3.1           py310hd6ae3a3_0
[conda] mkl_random                1.2.2           py310h00e6091_0
[conda] numpy                     1.23.4                   pypi_0    pypi
[conda] numpy-base                1.23.3          py310h8e6c178_1
[conda] pytorch                   1.13.0             py3.10_cpu_0    pytorch
[conda] pytorch-cuda              11.7                 h67b0de4_0    pytorch
[conda] pytorch-mutex             1.0                         cpu    pytorch
[conda] torch-hd                  3.4.0                    pypi_0    pypi
[conda] torchaudio                0.13.0                py310_cpu    pytorch
[conda] torchhd                   3.4.0                      py_0    torchhd
[conda] torchmetrics              0.10.1                   pypi_0    pypi
[conda] torchsummary              1.5.1                    pypi_0    pypi
[conda] torchvision               0.14.0                   pypi_0    pypi

Finally, the output of nvidia-smi:

>> nvidia-smi
Sun Nov  6 15:58:56 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:07:00.0 Off |                  N/A |
|  0%   36C    P8    16W / 320W |     98MiB / 10240MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------
...

>> nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0

Any update on this?

I made a completely fresh install:

conda create -n myenv python=3.8
conda activate myenv
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

and still have the issue:

>> python3.8 -c 'import torch; print(torch.backends.cudnn.enabled)'
True
>> python3.8 -c 'import torch; print(torch.cuda.is_available())'
/path_to/myenv/lib/python3.8/site-packages/torch/cuda/__init__.py:88: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at /opt/conda/conda-bld/pytorch_1666642975312/work/c10/cuda/CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0

Some useful command outputs:

>> nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.57       Driver Version: 515.57       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100 80G...  Off  | 00000000:CA:00.0 Off |                   On |
| N/A   38C    P0    65W / 300W |      0MiB / 81920MiB |     N/A      Default |
|                               |                      |              Enabled |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| MIG devices:                                                                |
+------------------+----------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |         Memory-Usage |        Vol|         Shared        |
|      ID  ID  Dev |           BAR1-Usage | SM     Unc| CE  ENC  DEC  OFA  JPG|
|                  |                      |        ECC|                       |
|==================+======================+===========+=======================|
|  No MIG devices found                                                       |
+-----------------------------------------------------------------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
>> nvcc -V
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:49:14_PDT_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0

Finally:

>> python3.8 -m torch.utils.collect_env
Collecting environment information...
/path_to/myenv/lib/python3.8/site-packages/torch/cuda/__init__.py:88: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at /opt/conda/conda-bld/pytorch_1666642975312/work/c10/cuda/CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Scientific Linux release 7.9 (Nitrogen) (x86_64)
GCC version: (GCC) 6.3.0
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.17

Python version: 3.8.5 | packaged by conda-forge | (default, Sep 24 2020, 16:55:52)  [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.76.1.el7.x86_64-x86_64-with-glibc2.10
Is CUDA available: False
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 515.57
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.13.0
[pip3] torchaudio==0.13.0
[pip3] torchvision==0.14.0
[conda] blas                      1.0                         mkl
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] mkl                       2021.4.0           h06a4308_640
[conda] mkl-service               2.4.0            py38h7f8727e_0
[conda] mkl_fft                   1.3.1            py38hd3c417c_0
[conda] mkl_random                1.2.2            py38h51133e4_0
[conda] numpy                     1.23.4           py38h14f4228_0
[conda] numpy-base                1.23.4           py38h31eccc5_0
[conda] pytorch                   1.13.0          py3.8_cuda11.7_cudnn8.5.0_0    pytorch
[conda] pytorch-cuda              11.7                 h67b0de4_0    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torchaudio                0.13.0               py38_cu117    pytorch
[conda] torchvision               0.14.0               py38_cu117    pytorch

Any help would be much appreciated!

I have just tested with pytorch-cuda=11.6 and the issue is still there (it now reports CUDA runtime version: 11.6.124).

It looks like you are running Linux, so it might be an NVIDIA Linux driver issue. Sometimes after the driver is updated, PyTorch can run into issues with the new driver. This might be redundant since you may have tried it already, but have you restarted your machine after running a package update? Another thing to explore would be downgrading the NVIDIA driver or PyTorch (if you have updated either of these) and seeing whether the problem persists. Regards.

@tiramisuNcustard I tried rebooting the machine, but nothing changed.

In my case, disabling the MIG mode of my GPU solved the issue:

>> nvidia-smi -mig 0

I did a fresh reinstall after disabling it, and everything works fine.

I just wanted to point this out, as it doesn't seem you've created any MIG devices.
Generally, MIG will work, but you would have to stick to the user guide and create the desired devices, etc.
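
For anyone landing here with the same symptom, a quick post-fix sanity check could look like this (a sketch, assuming MIG was disabled with nvidia-smi -mig 0 and the machine was rebooted):

import torch

print(torch.cuda.is_available())          # expected: True once the GPU is visible again
print(torch.cuda.device_count())          # expected: 1 for a single non-MIG GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA A100 80GB PCIe"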


I have the same issue.

My config:
Windows 10 22H2
PyTorch was installed with the command:
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

(modelscope38) C:\Users\Alexey\modelscope-text-to-video-synthesis>python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.8.16 (default, Mar  2 2023, 03:18:16) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: False
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA RTX A4000
Nvidia driver version: 531.18
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture=9
CurrentClockSpeed=3701
DeviceID=CPU0
Family=107
L2CacheSize=6144
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=3701
Name=AMD Ryzen 9 5900X 12-Core Processor
ProcessorType=3
Revision=8450

Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] open-clip-torch==2.16.0
[pip3] pytorch-lightning==1.7.7
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.1+cu117
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.1+cu117
[conda] blas                      1.0                         mkl
[conda] mkl                       2021.4.0           haa95532_640
[conda] mkl-service               2.4.0            py38h2bbff1b_0
[conda] mkl_fft                   1.3.1            py38h277e83a_0
[conda] mkl_random                1.2.2            py38hf11a4ad_0
[conda] numpy                     1.23.5           py38h3b20f71_0
[conda] numpy-base                1.23.5           py38h4da318b_0
[conda] open-clip-torch           2.16.0                   pypi_0    pypi
[conda] pytorch                   2.0.0           py3.8_cuda11.7_cudnn8_0    pytorch
[conda] pytorch-cuda              11.7                 h16d0643_3    pytorch
[conda] pytorch-lightning         1.7.7                    pypi_0    pypi
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torch                     2.0.0+cu117              pypi_0    pypi
[conda] torchaudio                2.0.0                    pypi_0    pypi
[conda] torchmetrics              0.11.4                   pypi_0    pypi
[conda] torchvision               0.15.1+cu117             pypi_0    pypi

nvcc version:

C:\Users\Alexey>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:59:34_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0

nvidia-smi:

C:\Users\Alexey>nvidia-smi
Mon Mar 20 21:52:49 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 531.18                 Driver Version: 531.18       CUDA Version: 12.1     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                      TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf            Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A4000              WDDM | 00000000:26:00.0  On |                  Off |
| 41%   41C    P8               16W / 140W|    479MiB / 16376MiB |      9%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|

Also, I have tried reinstalling the CUDA toolkit and resetting the system, and the problem is still there. I have checked the environment variables, and they look good.

I had this problem with my RTX 3060. In my case, I just restarted my PC and it helped.


Try running:

python -c "import torch; torch.zeros(1).cuda()"

This will force PyTorch to put a tensor onto the GPU and trigger the actual warning or error message.
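
The same check as a tiny script, if the one-liner is hard to read (just a sketch of the suggestion above):

import torch

try:
    # Forces CUDA initialization and surfaces the underlying error instead of a silent False
    x = torch.zeros(1, device="cuda")
    print("OK, tensor is on", x.device)
except Exception as e:
    print(f"{type(e).__name__}: {e}")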

Chances are it's a WSL GPU driver problem, which can be resolved using Killed while loading pygmalion-6b_dev, GPT-J and other recent models · Issue #440 · oobabooga/text-generation-webui · GitHub or libcuda.so.1 is not a symbolic link · Issue #5548 · microsoft/WSL · GitHub


I tried that and I got this error:

  File "<string>", line 1, in <module>
  File "C:\Users\Valerio Cadura\AppData\Roaming\Python\Python310\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Did you reach a solution? I'm still stuck. I have an RTX 3050, yet it shows the same warning…

Try restarting your PC; it can help sometimes.