torch.cuda.is_available() returns False even after installing CUDA toolkit 11.8

The problem is what the title says.
I installed torch with the following command from the PyTorch website:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

The following are the relevant packages returned by conda list:
nvidia-cublas-cu11 11.11.3.6 pypi_0 pypi
nvidia-cuda-cupti-cu11 11.8.87 pypi_0 pypi
nvidia-cuda-nvrtc-cu11 11.8.89 pypi_0 pypi
nvidia-cuda-runtime-cu11 11.8.89 pypi_0 pypi
nvidia-cudnn-cu11 8.7.0.84 pypi_0 pypi
nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
nvidia-curand-cu11 10.3.0.86 pypi_0 pypi
nvidia-cusolver-cu11 11.4.1.48 pypi_0 pypi
nvidia-cusparse-cu11 11.7.5.86 pypi_0 pypi
nvidia-nccl-cu11 2.19.3 pypi_0 pypi
nvidia-nvtx-cu11 11.8.86 pypi_0 pypi
torch 2.2.0+cu118 pypi_0 pypi
torchaudio 2.2.0+cu118 pypi_0 pypi
torchvision 0.17.0+cu118 pypi_0 pypi
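
To rule out a mismatch between the environment pip3 installed into and the interpreter I am testing with, a sanity check along these lines (just a sketch, run inside the same Conda environment as above) should show both pointing at the same prefix:

python3 -c 'import sys, torch; print(sys.executable); print(torch.__file__)'
pip3 -V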

And yet I still get the following output:
python3 -c 'import torch; print(torch.backends.cudnn.enabled)'
True
python3 -c 'import torch; print(torch.cuda.is_available())'
False
python3 -c 'import torch; print(torch.version.cuda)'
11.8
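
Since torch.cuda.is_available() only returns False without giving a reason, I assume forcing CUDA initialization (a sketch, output not pasted here) would raise the underlying error instead, and torch.utils.collect_env would dump the full environment report:

python3 -c 'import torch; torch.cuda.init()'
python3 -m torch.utils.collect_env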

The output of nvidia-smi shows: NVIDIA-SMI 525.147.05, Driver Version: 525.147.05, CUDA Version: 12.0

nvcc --version shows: Cuda compilation tools, release 11.8, V11.8.89

I don't understand why I am still getting False from torch.cuda.is_available().

I cannot reproduce the issue using the latest torch==2.2.0+cu118 wheels:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
...
Successfully installed nvidia-cublas-cu11-11.11.3.6 nvidia-cuda-cupti-cu11-11.8.87 nvidia-cuda-nvrtc-cu11-11.8.89 nvidia-cuda-runtime-cu11-11.8.89 nvidia-cudnn-cu11-8.7.0.84 nvidia-cufft-cu11-10.9.0.58 nvidia-curand-cu11-10.3.0.86 nvidia-cusolver-cu11-11.4.1.48 nvidia-cusparse-cu11-11.7.5.86 nvidia-nccl-cu11-2.19.3 nvidia-nvtx-cu11-11.8.86 torch-2.2.0+cu118 torchaudio-2.2.0+cu118 torchvision-0.17.0+cu118
...
/workspace# python3 -c 'import torch; print(torch.backends.cudnn.enabled)'
True
/workspace# python3 -c 'import torch; print(torch.cuda.is_available())'
True
/workspace# python3 -c 'import torch; print(torch.version.cuda)'
11.8

In the "Successfully installed" block, I have a few extra packages compared to yours; I'm not sure where these are coming from or whether they could be the reason: mpmath-1.3.0, networkx-3.2.1, sympy-1.12, triton-2.2.0
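
If it matters, these look like they could simply be declared dependencies of the wheel; a quick way to check (a sketch, output omitted) is to list what torch itself requires:

pip3 show torch
python3 -c 'import importlib.metadata as md; print(md.requires("torch"))'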

I am looking for some direction to debug this.