The problem is what the title says: torch.cuda.is_available() returns False even though I installed the CUDA 11.8 build of PyTorch.
I installed torch with the following command from the PyTorch website: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
Here are the relevant packages reported by conda list:
nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
nvidia-cudnn-cu11 8.7.0.84 pypi_0 pypi
nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
nvidia-nccl-cu11 2.19.3 pypi_0 pypi
nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
torch 2.2.0+cu118 pypi_0 pypi
torchaudio 2.2.0+cu118 pypi_0 pypi
torchvision 0.17.0+cu118 pypi_0 pypi
And yet I still get the following output:
python3 -c 'import torch; print(torch.backends.cudnn.enabled)'
True
python3 -c 'import torch; print(torch.cuda.is_available())'
False
python3 -c 'import torch; print(torch.version.cuda)'
11.8
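(For anyone hitting the same thing: the three one-liners above can be bundled into one small diagnostic script. This is my own sketch, not something from the original install, and collect_cuda_diagnostics is a hypothetical helper name; it just gathers everything PyTorch itself reports so it can be pasted into a bug report in one go.)

```python
# Hypothetical diagnostic helper: collects the CUDA-related facts that
# PyTorch exposes, without crashing if torch itself is missing.
import os


def collect_cuda_diagnostics():
    info = {}
    try:
        import torch
    except ImportError:
        info["torch"] = "not installed"
        return info
    info["torch"] = torch.__version__
    info["compiled_cuda"] = torch.version.cuda        # CUDA version the wheel was built against
    info["cudnn_enabled"] = torch.backends.cudnn.enabled
    info["cuda_available"] = torch.cuda.is_available()
    info["device_count"] = torch.cuda.device_count()  # 0 when no usable GPU is seen
    # An empty CUDA_VISIBLE_DEVICES string hides every GPU from this process.
    info["CUDA_VISIBLE_DEVICES"] = os.environ.get("CUDA_VISIBLE_DEVICES", "<unset>")
    return info


if __name__ == "__main__":
    for key, value in collect_cuda_diagnostics().items():
        print(f"{key}: {value}")
```

One thing this surfaces that the separate one-liners don't: if CUDA_VISIBLE_DEVICES is set to an empty string in the environment, is_available() returns False even with a working driver.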
The output of nvidia-smi is: NVIDIA-SMI 525.147.05  Driver Version: 525.147.05  CUDA Version: 12.0
nvcc --version shows: Cuda compilation tools, release 11.8, V11.8.89
I don't understand why I am still getting False from torch.cuda.is_available().
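Since nvidia-smi reports driver 525 (CUDA 12.0), the driver should be new enough for a cu118 wheel, so one thing worth ruling out is the driver/userspace itself. A sketch of one way to do that (my own, not part of the original setup) is to call cuInit from libcuda directly with ctypes and look at the raw return code, bypassing PyTorch entirely:

```python
# Hypothetical check: ask the NVIDIA driver library directly whether it
# can initialize. cuInit returns CUDA_SUCCESS (0) on success; any other
# code points at the driver or container setup rather than the PyTorch
# install. On a machine without the driver, loading libcuda fails.
import ctypes


def cuda_driver_init_status():
    try:
        libcuda = ctypes.CDLL("libcuda.so.1")
    except OSError:
        return "libcuda.so.1 not found (driver not installed or not on the loader path)"
    result = libcuda.cuInit(0)  # CUresult cuInit(unsigned int Flags)
    return f"cuInit returned {result} (0 means success)"


print(cuda_driver_init_status())
```

If cuInit succeeds here but torch.cuda.is_available() is still False, the problem is more likely in the Python environment (for example a second, CPU-only torch shadowing the cu118 one) than in the driver.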