I am aware that variations of this question have been asked multiple times, but even after working through many of those threads, I'm still stuck. I'm trying to get PyTorch with CUDA support running on my laptop, but torch.cuda.is_available() returns False. Selected system information and diagnostic outputs are as follows:
Lenovo ThinkPad P14S Gen4
NVIDIA RTX A500 Laptop GPU
Linux Kernel 6.11.11-1
NVIDIA Driver Version: 550.135
nvidia-smi output:
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.135                Driver Version: 550.135        CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX A500 Laptop GPU    Off  |   00000000:03:00.0 Off |                  N/A |
| N/A   42C    P0                7W / 30W |         8MiB / 4096MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
nvcc --version:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Oct_29_23:50:19_PDT_2024
Cuda compilation tools, release 12.6, V12.6.85
Build cuda_12.6.r12.6/compiler.35059454_0
torch.utils.collect_env:
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.31.2
Libc version: glibc-2.40
Python version: 3.12.7 (main, Oct 1 2024, 11:15:50) [GCC 14.2.1 20240910] (64-bit runtime)
Python platform: Linux-6.11.11-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: False
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA RTX A500 Laptop GPU
Nvidia driver version: 550.135
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.5.1
/usr/lib/libcudnn_adv.so.9.5.1
/usr/lib/libcudnn_cnn.so.9.5.1
/usr/lib/libcudnn_engines_precompiled.so.9.5.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.5.1
/usr/lib/libcudnn_graph.so.9.5.1
/usr/lib/libcudnn_heuristic.so.9.5.1
/usr/lib/libcudnn_ops.so.9.5.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
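For completeness, this is essentially the check that produces the False above (standard torch calls only; the prints beyond is_available() are just extra context I added):

import torch

# installed build and the CUDA version it was compiled against
print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)

# the actual problem: the runtime does not see the GPU
print("cuda available:", torch.cuda.is_available())  # False on my machine
print("device count:", torch.cuda.device_count())    # 0 here, consistent with the above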
I’ve also tried both a venv and a conda env running PyTorch 2.5.1 compiled against CUDA 12.4, with basically the same result.
Not that it should make any difference, but the CUDA installation is in both my PATH and my LD_LIBRARY_PATH.
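In case it's relevant, the same thing can be confirmed from inside the interpreter (a quick sketch; the only assumption is that libcuda.so.1 is the right name for the NVIDIA driver library on Linux, which PyTorch needs to be able to load for CUDA to work):

import ctypes
import os

# environment as seen by the Python process itself
print("PATH:", os.environ.get("PATH", ""))
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH", ""))

# torch.cuda.is_available() ultimately depends on the NVIDIA driver library;
# if this dlopen fails, PyTorch has no way of seeing the GPU either
try:
    ctypes.CDLL("libcuda.so.1")
    print("libcuda.so.1 loaded fine")
except OSError as exc:
    print("could not load libcuda.so.1:", exc)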
As far as I understand the whole setup, the versions should be compatible, and I really don't see what's going wrong. Please let me know if you need any additional information!
Edit: I’ve also posted this question to Stack Overflow (pytorch - CUDA not available - Stack Overflow) and will report back here with any solution found there.