torch.cuda.is_available() returns False with CUDA 12.3

The first lines of the nvidia-smi output are:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08              Driver Version: 545.23.08    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+

I have an NVIDIA L40S GPU.

I am trying to install PyTorch but have failed so far; with everything I have tried, I always get False from
torch.cuda.is_available()

Can anybody help?

Thanks

You might have installed the CPU-only PyTorch binary. Make sure the right PyTorch binary shipping with CUDA dependencies was installed by checking pip list | grep torch and print(torch.version.cuda).
If that's not the case, your system might have an issue with the NVIDIA driver, and you might need to reinstall it.
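As a quick sanity check, PyTorch wheels usually encode the compute platform in the local version suffix (e.g. 2.3.0+cu121 vs. 2.3.0+cpu). A minimal sketch of that convention below; the helper name is mine, and note that wheels without any suffix can still be CUDA builds, so print(torch.version.cuda) remains the authoritative check:

```python
def is_cuda_build(version: str) -> bool:
    """Return True if a PyTorch version string advertises a CUDA build.

    PyTorch wheels typically carry the compute platform in the local
    version suffix, e.g. "2.3.0+cu121" (CUDA 12.1) or "2.3.0+cpu"
    (CPU-only). A plain version with no suffix is inconclusive, so
    fall back to checking torch.version.cuda at runtime in that case.
    """
    return "+cu" in version

print(is_cuda_build("2.3.0+cu121"))  # True  - CUDA 12.1 wheel
print(is_cuda_build("2.3.0+cpu"))    # False - CPU-only wheel
print(is_cuda_build("2.3.0"))        # False - no suffix, check torch.version.cuda
```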

@ptrblck Thanks for the quick reply.
Following is the output from pip list | grep torch:

torch 2.3.0
torchaudio 2.3.0
torchvision 0.18.0

Output from print(torch.version.cuda):

12.1

Thanks! It works now. I actually didn't have access to a GPU in my environment; after turning it on, it works.

Could you describe what exactly this means and how you’ve “turned on” your GPU?

I initially used a CPU-only server, which is why torch.cuda.is_available() returned False. After enabling the GPU, it started working.