I have NVIDIA-SMI 560.35.03, Driver Version: 560.35.03, CUDA 12.6 installed on the server. The Python version is 3.9.18. I tried installing torch 2.5 + cu124 and 2.4 + cu121, but both return False for torch.cuda.is_available(). PS: torch.backends.cudnn.enabled is True.
Your NVIDIA driver might not be properly installed if PyTorch cannot communicate with the GPU, assuming you've installed a CUDA-enabled PyTorch binary (you can check this e.g. via torch.version.cuda).
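A quick check like this should tell you whether the installed wheel is a CUDA build at all (a minimal sketch; the exact version strings will differ on your setup):

```python
import torch

print("Torch:", torch.__version__)        # e.g. "2.5.1+cu124" (CUDA build) vs. "2.5.1+cpu" (CPU-only build)
print("CUDA build:", torch.version.cuda)  # None indicates a CPU-only wheel
print("Available:", torch.cuda.is_available())
print("Device count:", torch.cuda.device_count())
```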
I checked print("Torch version:", torch.__version__); print("CUDA version:", torch.version.cuda) and print("CUDA available:", torch.cuda.is_available()). The third one returns False.
In this case your system seems to have trouble communicating with the device, which can happen e.g. if you've updated the driver without a restart. Recently, another user reported seeing this issue after letting their device sleep and needed to reset the GPU or restart their system.
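If you want to see the underlying error instead of just False, you can force the CUDA initialization so the failure is raised (just a sketch; the exact exception message depends on your driver state):

```python
import torch

# is_available() returns False instead of raising; forcing the init
# makes the underlying driver/runtime error visible.
try:
    torch.cuda.init()
    print("CUDA initialized, device 0:", torch.cuda.get_device_name(0))
except Exception as e:
    print("CUDA initialization failed:", e)
```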
As far as I understand, PyTorch ships the CUDA runtime it needs independently of the system install. So, in short, there is no need to downgrade.
I have messed up my system so many times that I would not try to downgrade the driver.
But try at least the following: install only PyTorch with the recipe above, without torchaudio and torchvision, and see what happens.
So: $ pip3 install torch
or: $ conda install pytorch pytorch-cuda=12.4 -c pytorch -c nvidia
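After reinstalling, a short check along these lines (same idea as the sketch above) should show whether the CUDA build is now being picked up:

```python
import torch
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```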