Torch.cuda.is_available() returns False with CUDA 12.6

I’ve combed through a few forum posts on this topic, but none of the solutions I’ve seen have worked. I installed PyTorch to my environment with the following command: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124.

Output from ‘torch.__version__’:

2.4.0+cpu

Output from ‘nvcc --version’:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:28:36_Pacific_Standard_Time_2024
Cuda compilation tools, release 12.4, V12.4.99
Build cuda_12.4.r12.4/compiler.33961263_0

Output from ‘nvidia-smi’:

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.94                 Driver Version: 560.94         CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3050 ...  WDDM  |   00000000:01:00.0 Off |                  N/A |
| N/A   45C    P0              9W /   40W |       0MiB /   4096MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

I do not know why one of these shows 12.4 and the other shows 12.6.

I tried a fresh install of both torch and the CUDA toolkit, neither of which had any effect. One forum post suggested rolling torch’s supported CUDA version back to 12.1 if you have 12.6 installed, but this did not work either. Am I missing something obvious?

You have installed the CPU-only binary and should install the PyTorch binary with CUDA support instead.

Your locally installed CUDA toolkit won’t be used unless you build PyTorch from source or a custom CUDA extension; the pip wheels ship their own CUDA runtime. That also explains the mismatch you noticed: nvcc reports the locally installed toolkit (12.4), while nvidia-smi reports the highest CUDA version the installed driver supports (12.6).
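
As a quick check (a minimal sketch; python3 here stands for whatever interpreter actually runs your script and is an assumption on my part), print the version string, the CUDA version the wheel was built against, and the availability flag:

python3 -c "import torch; print(torch.__version__); print(torch.version.cuda); print(torch.cuda.is_available())"

A CUDA-enabled wheel reports something like 2.4.1+cu124 and 12.4, while the CPU-only wheel reports a +cpu suffix and None for torch.version.cuda.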

This is what I’m confused about. I used the command provided in the ‘Start Locally’ section of the PyTorch page, which yields the following on completion:

Successfully installed torch-2.4.1+cu124 torchaudio-2.4.1+cu124 torchvision-0.19.1+cu124

However, when I run the program again, I still get ‘2.4.0+cpu’ as the version.

Is the toolkit interfering with torch in some way?

You’ve most likely installed multiple PyTorch binaries and the CPU-only one is being picked up. Uninstall all previous PyTorch installations and reinstall only the CUDA-enabled release (see the sketch below).

No, since the PyTorch binary itself shows it does not have CUDA support: 2.4.0+cpu.
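
A minimal sketch of that cleanup, assuming pip3 and python3 point at the same environment you installed into (adjust the commands if you use a virtualenv or conda):

pip3 uninstall -y torch torchvision torchaudio
pip3 uninstall -y torch torchvision torchaudio    # run again until pip reports nothing left to uninstall
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
python3 -c "import torch; print(torch.__version__, torch.__file__)"

If the path printed by the last command does not live in the environment pip3 installed into, you are mixing interpreters (for example system Python vs. a conda environment), which would explain why the +cpu build keeps being picked up.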

That was the problem, thank you!

Did it work? I have the same issue.

I have a similar problem.

Output from ‘torch.__version__’:

2.6.0.dev20241107+cu124

Output from ‘nvcc --version’:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Sep_12_02:18:05_PDT_2024
Cuda compilation tools, release 12.6, V12.6.77
Build cuda_12.6.r12.6/compiler.34841621_0

Output from ‘nvidia-smi’:

I think there is a mismatch between the PyTorch and CUDA versions, but I couldn’t find a PyTorch build for CUDA 12.6.

This is irrelevant, as already explained above: the locally installed CUDA toolkit won’t be used by the pip-installed PyTorch binaries.

Thanks for the information. However, if it does not work even though I downloaded PyTorch from the official source, could the problem be related to the Ubuntu version of my system? (My Ubuntu version is 24.04.)

No, since Ubuntu 24.04 is working fine for me.
I don’t know what kind of issues you are seeing, but if the right PyTorch binary was installed and still cannot communicate with the GPU, check whether the NVIDIA drivers were properly installed.
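
As a sanity check (a sketch only; again assuming python3 is the interpreter with the cu124 wheel installed), confirm that the driver responds and that PyTorch can actually allocate a tensor on the GPU:

nvidia-smi
python3 -c "import torch; print(torch.cuda.device_count()); print(torch.randn(2, 2, device='cuda'))"

If nvidia-smi works but the second command raises a CUDA initialization error, the wheel and the driver are not communicating, which usually points at a driver installation problem.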


I’m sure I installed the NVIDIA drivers properly. The outputs of the ‘nvidia-smi’ and ‘nvcc --version’ commands match the versions I installed, but PyTorch cannot see the drivers.

Update: Torch 2.5.0+cu124 runs successfully on my current system (Ubuntu 24.04).