I have been trying to get PyTorch to recognize my GPU on Pop!_OS for over a month now with no success. I'm a beginner trying to get started with ML/AI, but this hurdle has been blocking me for a while.
I've looked into many threads that seemed relevant on this forum and elsewhere, and I've also tried debugging the issue with the help of AI, to no avail. The problem occurs on both my laptop (Nvidia 4070) and my desktop (Nvidia 3070), both running Pop!_OS with the latest Nvidia driver provided by System76 (570.133.07).
When I check whether torch recognizes the GPU, I get:
```
print(torch.cuda.is_available())
/home/alikebrahim/ComfyUI/.venv/lib/python3.12/site-packages/torch/cuda/__init__.py:174: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0
False
```
Can someone help guide me on how to debug and resolve this, please? I've tried different versions of Python and torch, and even different venv tools (uv, virtualenv, and conda).
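For reference, below is a minimal diagnostic sketch of the kind of checks I can run and post the output of, if that helps. It only uses standard PyTorch attributes and calls nvidia-smi via subprocess to separate the driver-level check from the PyTorch-level one; the script itself is just an illustration, not something from my original setup.

```python
# Minimal CUDA visibility check: compares what the Nvidia driver reports
# (nvidia-smi) with what PyTorch sees inside the active venv.
import subprocess
import torch

print("torch version:", torch.__version__)
print("torch built with CUDA:", torch.version.cuda)   # CUDA version the wheel was compiled against
print("cuda available:", torch.cuda.is_available())    # currently returns False for me

# Driver-level check, independent of PyTorch
result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
print(result.stdout or result.stderr)
```

If nvidia-smi works here but torch.cuda.is_available() is still False, I assume that would point at the PyTorch/CUDA wheel rather than the driver, but I'd appreciate confirmation on how to read that.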