torch.cuda.is_available() returns False even though CUDA is installed

Tried it several times but to no avail…

What is the output of nvidia-smi and how did you install PyTorch?

I would recommend trying the pip wheel install, e.g. via:

```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
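After installing the wheel, a quick sanity check (assuming the install succeeded) is to print the PyTorch version, the CUDA version it was built with, and the availability flag; a CPU-only wheel will show `None` for the CUDA version:

```shell
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```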

I had the same issue and tried everything, but nothing solved the problem. While inspecting my environment variables to check whether CUDA was set up properly, I noticed that CUDA_VISIBLE_DEVICES was set to 1. I changed it to 0 and the problem was solved.
I hope this helps you.
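The behavior above can be sketched as follows; `visible_gpus` is a hypothetical helper that mimics how the CUDA runtime reads `CUDA_VISIBLE_DEVICES`. On a single-GPU machine the only valid index is 0, so setting the variable to 1 masks the only GPU and `torch.cuda.is_available()` returns False:

```python
import os

def visible_gpus():
    """Return the GPU indices CUDA will expose, or None if unrestricted.

    The CUDA runtime reads CUDA_VISIBLE_DEVICES at initialization; an
    index that doesn't exist on the machine leaves no visible device,
    which makes torch.cuda.is_available() return False.
    """
    value = os.environ.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return None  # variable unset: all GPUs are visible
    return [int(d) for d in value.split(",") if d.strip()]

# On a single-GPU machine, only index 0 exists:
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # masks the only GPU -> CUDA unavailable
print(visible_gpus())
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # exposes GPU 0 again
print(visible_gpus())
```

Note that the variable must be set before the CUDA context is initialized (i.e. before the first CUDA call in the process), otherwise the change has no effect.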


Thank you so much for your response. By the way, I solved this issue by installing the CUDA toolkit inside my conda env with conda install -c anaconda cudatoolkit (Cudatoolkit :: Anaconda.org). It looks like the PyTorch installed inside my conda environment didn’t see the path to the system-installed CUDA toolkit, so it needed the CUDA toolkit installed inside the env.

Another solution: when I had this problem, it was solved by installing this package:

```
nvidia-ml-py3==7.352.0
```

I was also facing the same issue when installing through conda, but it was resolved by installing PyTorch using pip3 or pip.

I am running PyTorch in Docker. My problem was that I wasn’t passing the runtime and GPU options when starting the container. `docker run -it <image_id>` resulted in this behavior; `docker run -it --runtime=nvidia --gpus all <image_id>` fixed it.