No CUDA GPUs are available

Hi, I’m trying to run a project inside a conda env. I have an RTX 3070 Ti installed in my machine, and it seems the CUDA initialization call is causing issues in the program.

Error:
File "sTrain.py", line 37, in <module> torch.cuda.set_device(gpuid) File "/home/user/miniconda3/envs/deeplearningenv/lib/python3.6/site-packages/torch/cuda/__init__.py", line 263, in set_device torch._C._cuda_setDevice(device) File "/home/user/miniconda3/envs/deeplearningenv/lib/python3.6/site-packages/torch/cuda/__init__.py", line 172, in _lazy_init torch._C._cuda_init() RuntimeError: No CUDA GPUs are available

Any guidance would be very helpful. I tried uninstalling cudatoolkit, pytorch, and torchvision and reinstalling them with conda install pytorch torchvision cudatoolkit=10.1, but I get the same error. The GPU also appears in the device manager.

I’m not sure if this will help in finding a solution, but torch.cuda.is_available() returns False.

Thank you in advance.

The error points to a missing NVIDIA driver, so you might want to reinstall it.
While the PyTorch pip wheels and conda binaries ship with the CUDA runtime, an NVIDIA driver is still needed to execute workloads on the GPU.
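
One quick way to narrow it down is to check what the installed binary reports (a minimal sketch using only standard torch calls; the labels are just for readability):

import torch

# Which PyTorch build is installed and which CUDA runtime it was built against
print("torch version:  ", torch.__version__)
print("built with CUDA:", torch.version.cuda)   # None would mean a CPU-only build
# Whether the driver exposes any GPU to this process
print("cuda available: ", torch.cuda.is_available())
print("device count:   ", torch.cuda.device_count())

If torch.version.cuda is None, you have a CPU-only build installed; if it shows a version but is_available() is still False, the driver (or its visibility from inside the env) is the likely culprit.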

I’ve been trying that, but I’m not having any luck. My card says it’s on CUDA 11.4. Is it possible to downgrade to 11.1?

[screenshot]

The version seems correct here, but when I run nvidia-smi I see the CUDA version is 11.4.

[screenshot of nvidia-smi output showing CUDA 11.4]

It seems you’ve installed a new driver while an older CUDA toolkit was still installed.
Try to compile the CUDA samples and run them to make sure your setup is working.
If that’s not the case, uninstall the driver and the CUDA toolkit and reinstall them.
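
If you’d rather do a quick sanity check from Python instead of the CUDA samples, a minimal smoke test could look like this (a sketch, not a substitute for the samples):

import torch

assert torch.cuda.is_available(), "driver/runtime still not visible to PyTorch"

device = torch.device("cuda:0")
x = torch.randn(1024, 1024, device=device)  # allocate directly on the GPU
y = x @ x                                   # run a small matmul on the GPU
torch.cuda.synchronize()                    # make sure the kernel actually executed
print("OK on", torch.cuda.get_device_name(0))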
