torch.cuda.is_available() returns False after CUDA 9.0.176 is installed. Could anyone help me with this? Thanks!


I just got a very strange problem:

I ran the commands:

print(torch.__version__)
print(torch.version.cuda)
print(torch.backends.cudnn.version())

and the output was:

0.4.1.post2 (PyTorch version)
9.0.176 (CUDA version)
7.1.2 (cuDNN version)
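For reference, those checks can be run together in one short snippet (no GPU is required just to run it):

```python
import torch

print(torch.__version__)               # installed PyTorch version
print(torch.version.cuda)              # CUDA version the binaries were built with (None for CPU-only builds)
print(torch.backends.cudnn.version())  # cuDNN version bundled with the binaries
print(torch.cuda.is_available())       # True only if a usable GPU and driver are found
```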

I also ran “nvcc --version”; the results were:

It is very strange: the CUDA version installed on the Linux system is 9.0.176, and the CUDA version PyTorch was built against is also 9.0.176, but torch.cuda.is_available() still returns False.

Could anyone help me with this? Many thanks!


  • PyTorch Version (e.g., 1.0): 0.4.1.post2
  • OS (e.g., Linux): Linux
  • Python version: 3.6.5 :: Anaconda, Inc.
  • CUDA/cuDNN version: 9.0.176 (CUDA version); 7.1.2 (cuDNN version)

I am facing the same problem and have tried all the options discussed here, but none of them worked. Can somebody please help me solve this problem?

OS: Linux
PyTorch version: 1.0.1
Python version: 3.7.3
CUDA: 9.0.176
Driver version: 396.24

Thank you

Did you install the PyTorch binaries? If so, you don’t need a system CUDA installation, since the binaries ship with their own CUDA runtime.
You can check the shipped CUDA version with print(torch.version.cuda).

I think your driver might be too old, if your PyTorch binaries ship with a CUDA version newer than 9.1 (which is most likely the case). Have a look at this table to check the compatibility, and try to update your driver.
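That table boils down to a minimum driver version per CUDA release. As an illustrative sketch (the version pairs below are my reading of NVIDIA's compatibility table; verify them against the current table before relying on them):

```python
# Minimum Linux driver version per CUDA toolkit release (assumed values,
# taken from NVIDIA's CUDA compatibility table; double-check for your setup).
MIN_DRIVER = {
    "9.0": (384, 81),
    "9.2": (396, 26),
    "10.0": (410, 48),
    "10.1": (418, 39),
    "10.2": (440, 33),
    "11.0": (450, 36),
}

def driver_supports(cuda_version: str, driver: str) -> bool:
    """Return True if the installed driver meets the minimum for this CUDA release."""
    installed = tuple(int(part) for part in driver.split("."))[:2]
    return installed >= MIN_DRIVER[cuda_version]

# Driver 396.24 from the report above is too old even for CUDA 9.2:
print(driver_supports("9.2", "396.24"))  # -> False
print(driver_supports("9.0", "396.24"))  # -> True
```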


@ptrblck Could you please kindly share the link for the PyTorch binaries installation?

Sure! You can find the command to install the binaries here.


Hi @ptrblck,
Thank you for your comments. I was facing a similar issue. Could you please suggest what might be wrong with the following setup?

>>> torch.cuda.is_available()
>>> torch.backends.cudnn.enabled
$ cat /usr/local/cuda/version.txt 
CUDA Version 10.0.130

pytorch version 1.6.0

I can’t change anything on the main system, because it’s a remote PC and I don’t have the rights to upgrade or install an NVIDIA driver.

Thank you :slight_smile:

I solved this problem by installing cudatoolkit 10.0.
I use a .yml file to create the conda environment, and I added the following line:

- cudatoolkit=10.0

Conda selects the dependencies according to this CUDA version, and hence my PyTorch version also changed, which is OK for me.
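A minimal environment.yml along those lines might look like this (the environment name and Python pin are illustrative, not from the post above):

```yaml
name: torch-cuda10            # illustrative name
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.7
  - cudatoolkit=10.0          # pins the CUDA runtime conda installs
  - pytorch                   # conda resolves a PyTorch build matching cudatoolkit
```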
Current environment,

>>> torch.cuda.is_available()
>>> torch.backends.cudnn.enabled
$ cat /usr/local/cuda/version.txt 
CUDA Version 10.0.130

pytorch version 1.3.1

Which GPU(s) are you using and how did you install PyTorch 1.6.0?
If you’ve installed the conda binaries or pip wheels, your local CUDA installation will not be used; the binaries (or wheels) ship with their own CUDA, cudnn, NCCL, etc. libraries.
Good to hear you can install PyTorch 1.3.1 with CUDA 10.0, but that’s not a proper solution, as you would be missing all the new features and bug fixes.

GPU is GeForce GTX 1080 Ti and PyTorch was installed using conda.
Yes I’m using virtual environment, so everything is installed using conda.
Do you mean that conda will install CUDA 10.0 in a way that doesn’t disturb the local CUDA installation? In that case, is it possible to select the local CUDA version and bypass conda’s CUDA binaries?

I tried PyTorch 1.5 or higher with cudatoolkit=10.0, but there are conflicts with many other packages.

Yes, exactly. The install instructions let you select the CUDA version, which will be installed only in your current environment (conda or pip).
If you get conflicts with other packages, I would recommend creating a new virtual environment, if that’s possible, and reinstalling PyTorch as well as all the other packages you need.
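For example, a fresh environment could be set up along these lines (the environment name and version pins are illustrative; take the exact command from the official install selector):

```shell
# Create a clean conda environment and install PyTorch with its own CUDA runtime.
conda create -n torch-env python=3.8
conda activate torch-env
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
```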



Thank you very much. It’s the same here. PyTorch is good, but this is clearly messy…

  1. My ENV:
Device 0: "GeForce GTX 1050 Ti"
NVIDIA-SMI 450.57       Driver Version: 450.57       CUDA Version: 11.0
  2. PyTorch Configuration
➜  pytorch python
Python 3.8.2 (default, Jul 16 2020, 14:00:26) 
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda
<module 'torch.cuda' from '/home/jiapei/.local/lib/python3.8/site-packages/torch/cuda/__init__.py'>
>>> torch.cuda.is_available()
>>> print(torch.version.cuda)
>>> torch.backends.cudnn.enabled
  3. CUDA


➜  ~ cat /usr/local/cuda/version.txt 
CUDA Version 11.0.228

It is unbelievably weird that torch.cuda.is_available() failed to return a reasonable value…

I’m actually testing yolov5, which has the following line:

assert torch.cuda.is_available(), 'CUDA unavailable, invalid device %s requested' % device  # check availability

Clearly, it’s going to fail the assertion.
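A more forgiving variant of that check (a sketch of the idea, not yolov5's actual implementation) falls back to the CPU instead of asserting:

```python
import torch

def select_device(device: str = "") -> torch.device:
    """Return the requested CUDA device if one is usable, else the CPU.

    `device` is '' or 'cpu' for CPU, or a GPU index like '0'.
    This helper is illustrative, not yolov5 code.
    """
    if device and device != "cpu" and torch.cuda.is_available():
        return torch.device(f"cuda:{device}")
    return torch.device("cpu")

print(select_device("0"))  # prints cpu on machines where CUDA is unavailable
```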

It seems a PyTorch build that supports CUDA 11 would be the solution?
Hmm, any further suggestions?
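Once CUDA 11 builds are published on the PyTorch wheel index, an install command along these lines should work (the version pins below are illustrative; take the exact command from the official install selector):

```shell
pip install torch==1.7.0+cu110 -f https://download.pytorch.org/whl/torch_stable.html
```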


@peijason As discussed here, I hope PyTorch community will be able to help.

In my case, it turned out that I had installed PyTorch without cudatoolkit, and print(torch.version.cuda) returned None.