Torch not compiled with CUDA enabled

I am trying to use PyTorch for the first time with PyCharm. When I try to use CUDA, I get this error:

Traceback (most recent call last):
  File "C:/Users/omara/PycharmProjects/test123/test.py", line 4, in <module>
    my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cuda")
  File "C:\Users\omara\anaconda3\envs\deeplearning\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

I already installed the CUDA toolkit using the PyTorch command in the Anaconda prompt:

(deeplearning) C:\WINDOWS\system32>conda info

     active environment : deeplearning
    active env location : C:\Users\omara\anaconda3\envs\deeplearning
            shell level : 2
       user config file : C:\Users\omara\.condarc
 populated config files : C:\Users\omara\.condarc
          conda version : 4.9.2
    conda-build version : 3.20.5
         python version : 3.8.5.final.0
       virtual packages : __cuda=11.2=0
                          __win=0=0
                          __archspec=1=x86_64
       base environment : C:\Users\omara\anaconda3  (writable)
           channel URLs : https://repo.anaconda.com/pkgs/main/win-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/win-64
                          https://repo.anaconda.com/pkgs/r/noarch
                          https://repo.anaconda.com/pkgs/msys2/win-64
                          https://repo.anaconda.com/pkgs/msys2/noarch
          package cache : C:\Users\omara\anaconda3\pkgs
                          C:\Users\omara\.conda\pkgs
                          C:\Users\omara\AppData\Local\conda\conda\pkgs
       envs directories : C:\Users\omara\anaconda3\envs
                          C:\Users\omara\.conda\envs
                          C:\Users\omara\AppData\Local\conda\conda\envs
               platform : win-64
             user-agent : conda/4.9.2 requests/2.24.0 CPython/3.8.5 Windows/10 Windows/10.0.19041
          administrator : True
             netrc file : None
           offline mode : False

How did you install PyTorch? Did you use the correct install command? This is the pip install:

pip install torch===1.7.1+cu110 torchvision===0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

I used
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch

The torch library is working: if I just use device="cpu" instead of device="cuda", I don’t get any error.

import torch

print(torch.__version__)
my_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32, device="cpu")
print(my_tensor)
print(torch.cuda.is_available())

What happens when you run nvidia-smi and when you print torch.cuda.is_available()? What does the PyTorch version print out, too?
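For reference, the three checks above can be rolled into one rough triage function. This is only a sketch: `diagnose` and its argument names are made up for illustration, and its inputs correspond to torch.cuda.is_available(), the CUDA version nvidia-smi reports, and torch.version.cuda.

```python
from typing import Optional

def diagnose(cuda_available: bool,
             driver_cuda: Optional[str],
             built_cuda: Optional[str]) -> str:
    """Rough triage from three checks: torch.cuda.is_available(), the CUDA
    version printed by nvidia-smi (None if it fails), and torch.version.cuda
    (None on CPU-only PyTorch builds)."""
    if driver_cuda is None:
        return "nvidia-smi failed: no working NVIDIA driver"
    if built_cuda is None:
        return "CPU-only PyTorch build: reinstall a CUDA-enabled binary"
    if not cuda_available:
        return "CUDA build installed but not usable: check driver vs. runtime versions"
    return "CUDA is working"

print(diagnose(False, "11.2", None))
```

The ordering matters: a missing driver masks everything else, and a CPU-only build makes is_available() return False no matter what the driver reports.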


Thank you, Dwight, for trying to help.

print(torch.cuda.is_available())

outputs False for me,

but running nvidia-smi from the Anaconda prompt shows I have CUDA Version 11.2:

(deeplearning) C:\WINDOWS\system32>nvidia-smi
Sun Feb 21 10:06:38 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.89       Driver Version: 460.89       CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 2060   WDDM  | 00000000:01:00.0 Off |                  N/A |
| N/A   40C    P8     9W /  N/A |    164MiB /  6144MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

And when I run conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch

it tells me that all requested packages are already installed.

I tried to uninstall CUDA and PyTorch and install them again, but nothing changed.

I created a new environment, and it is working well. I don’t really know what the exact problem was, but it is solved.
Thank you for your assistance, Dwight 🙂


No problem, glad you got it to work.


print(torch.cuda.is_available()) returns False for me after installing torchgeometry. My PyTorch worked well until I installed it. I will update if I find any solution.

Could you check if torchgeometry might have uninstalled your previous PyTorch installation and installed a CPU-only version instead? The logs from the torchgeometry install step should indicate this, and you might want to install it via pip install ... --no-dependencies or change the requirement for this package.
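A quick way to check for a CPU-only build from Python is torch.version.cuda, which holds the CUDA version the binary was built against and is None on CPU-only builds. A minimal sketch (the helper name `is_cuda_build` is made up for illustration):

```python
def is_cuda_build(torch_version_cuda):
    # torch.version.cuda is a string like "11.0" on CUDA-enabled builds
    # and None on CPU-only builds.
    return torch_version_cuda is not None

# After `import torch`, you would call: is_cuda_build(torch.version.cuda)
print(is_cuda_build("11.0"))  # True
print(is_cuda_build(None))    # False
```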


Yes, you are right. I installed the wrong version (the CPU-only version).

Hi @timmyvg, how did you uninstall the CPU-only PyTorch and install a CUDA-enabled PyTorch?

In the end I switched from Conda to virtualenv and it worked at the first try.

I created my virtualenv with virtualenv virtualenv_name

Then I did

workon virtualenv_name

Then I installed PyTorch as specified on the official PyTorch website, but selecting pip instead of conda as the package manager (Start Locally | PyTorch):

pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

This worked for me, and now I have a CUDA-enabled version of PyTorch on my machine.


For 2022 readers, please go to the official PyTorch website and select the appropriate choices in the table provided. Copy and paste the auto-generated command, which will uninstall existing torch/torchvision/torchaudio versions and install the CUDA-enabled versions.

If you are working in a conda environment, please remove the existing conda-installed torch packages before installing with pip.

In my case,
conda install python=3.10 pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch failed.
Eventually, I installed successfully by adding -c nvidia to the above, resulting in
conda install python=3.10 pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch -c nvidia

Hi, I have tried all the solutions listed above, but I am still not able to resolve the error.

  • Tried creating new environment in Conda
  • Tried installing using both conda and pip
  • Tried installing both CUDA versions 11.7 and 11.8

Could you please suggest any other alternatives? (I am not familiar with virtualenv.)

Could you show some install logs of, e.g., the attempt to install the current 2.0.0+cu117 pip wheel in a new, empty environment, please?
The log could give us a clue whether, e.g., pip is unable to find the right wheel because your Python version is too old, or whether any other issue occurs.

Hi, I got a new laptop with an RTX 4060 and CUDA 12.0. I realized that PyTorch does not provide support for CUDA 12.0, and the only way to run seems to be using a Docker container (PyTorch | NVIDIA NGC). Could you please suggest alternative approaches? I am new to PyTorch; is there an easier way to get this working?

The PyTorch binaries ship with their own CUDA runtime and CUDA libraries (such as cuBLAS, cuDNN, NCCL, etc.). Your locally installed CUDA toolkit will be used if you build PyTorch from source or custom CUDA extensions. For your 4060 you can install the current stable or nightly PyTorch binaries with CUDA 11.8.
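Since the binaries ship their own CUDA runtime, the relevant check is only that the driver's CUDA version (what nvidia-smi reports) is at least as new as the runtime the wheel was built with. A small sketch of that comparison (the function name `driver_supports` is assumed for illustration):

```python
def driver_supports(driver_cuda: str, wheel_cuda: str) -> bool:
    """NVIDIA drivers are backward compatible: a driver reporting CUDA 12.0
    can run binaries built against CUDA 11.8 (or older)."""
    def parse(v):
        # "11.8" -> (11, 8), so tuple comparison orders versions correctly
        return tuple(int(x) for x in v.split("."))
    return parse(driver_cuda) >= parse(wheel_cuda)

print(driver_supports("12.0", "11.8"))  # True
```

So for the RTX 4060 case above, a CUDA 12.0 driver with a cu118 wheel is a supported combination.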

I installed the CUDA toolkit 11.8 and then installed PyTorch 2.0 for CUDA 11.8 (stable version). Still getting the same error.

When I run nvidia-smi, I still get the CUDA version as 12.0. How is it related to the CUDA toolkit 11.8 that I installed?

Could you post the used pip/conda install command as well as its output, since it should show which PyTorch binary is installed and would reveal whether the CPU-only binaries were installed.

nvidia-smi returns the driver version and the CUDA version corresponding to this driver.

Assuming you have installed the cuda-toolkit conda binary, the output of nvidia-smi won’t relate to it. Also, why did you install it instead of using the provided commands?