I also ran “nvcc --version”; the results were:
It was very strange: the CUDA version installed on the Linux system is 9.0.176, and the CUDA version PyTorch needs is also 9.0.176, but torch.cuda.is_available() still returns False.
Did you install the PyTorch binaries? If so, you don’t need a system CUDA installation, since CUDA already ships with PyTorch.
You can check it with print(torch.version.cuda).
I think your driver might be too old if your PyTorch binaries ship with a CUDA version newer than 9.1 (which is most likely the case). Have a look at this table to see the compatibility, and try to update your driver.
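As a rough sketch of what that compatibility table encodes, the snippet below checks a driver version against the approximate minimum Linux driver required per CUDA toolkit release. The numbers are copied from NVIDIA’s compatibility table from around this era and may lag behind newer releases, so treat them as illustrative rather than authoritative.

```python
# Approximate minimum Linux driver version per CUDA toolkit release,
# taken from NVIDIA's CUDA-compatibility table (illustrative values).
MIN_DRIVER = {
    "9.0": (384, 81),
    "9.1": (387, 26),
    "10.0": (410, 48),
    "10.1": (418, 39),
    "10.2": (440, 33),
    "11.0": (450, 36),
}

def driver_supports(cuda_version: str, driver_version: str) -> bool:
    """Return True if the installed driver meets the minimum
    required for the given CUDA toolkit release."""
    required = MIN_DRIVER[cuda_version]
    installed = tuple(int(p) for p in driver_version.split(".")[:2])
    return installed >= required

# e.g. a 390.xx driver is fine for CUDA 9.0 but too old for CUDA 10.0:
print(driver_supports("9.0", "390.116"))   # → True
print(driver_supports("10.0", "390.116"))  # → False
```

You can read your installed driver version from nvidia-smi and compare it against the CUDA version reported by print(torch.version.cuda).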
Which GPU(s) are you using and how did you install PyTorch 1.6.0?
If you’ve installed the conda binaries or pip wheels, your local CUDA installation will not be used; the binaries (or wheels) ship with their own CUDA, cuDNN, NCCL, etc. libraries.
Good to hear you can install PyTorch 1.3.1 with CUDA 10.0, but that’s not a proper solution, as you would be missing all the new features and bug fixes.
The GPU is a GeForce GTX 1080 Ti, and PyTorch was installed using conda.
Yes I’m using virtual environment, so everything is installed using conda.
Do you mean that conda will install CUDA 10.0 in a way that does not disturb the local CUDA installation? In that case, is it possible to select the local CUDA version and bypass conda’s CUDA binaries?
I tried PyTorch 1.5 or higher with cudatoolkit=10.0, but there were conflicts with many other packages.
Yes, exactly. The install instructions let you select the CUDA version, which will be installed only in your current environment (conda or pip).
If you get conflicts with other packages, I would recommend creating a new virtual environment, if that’s possible, and reinstalling PyTorch as well as all the other packages you need.
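The steps above could look roughly like this. The environment name “torch-env” and the Python/cudatoolkit versions are illustrative assumptions; pick the CUDA version your driver supports from the official install selector.

```shell
# Create a fresh conda environment so the PyTorch install cannot
# conflict with packages in your existing environment.
conda create -n torch-env python=3.8
conda activate torch-env

# Install PyTorch with a bundled CUDA toolkit (version is illustrative).
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

# Verify inside the new environment:
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```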
➜ pytorch python
Python 3.8.2 (default, Jul 16 2020, 14:00:26)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda
<module 'torch.cuda' from '/home/jiapei/.local/lib/python3.8/site-packages/torch/cuda/__init__.py'>
>>> torch.cuda.is_available()
False
>>> print(torch.version.cuda)
11.0
>>> torch.backends.cudnn.enabled
True
>>>
and:
➜ ~ cat /usr/local/cuda/version.txt
CUDA Version 11.0.228
It is unbelievably weird that torch.cuda.is_available() failed to return a reasonable value…
I’m actually testing yolov5, which has the following line: assert torch.cuda.is_available(), 'CUDA unavailable, invalid device %s requested' % device # check availablity. Clearly, it’s going to fail the assertion.
It seems a PyTorch build supporting CUDA 11 should be the solution?
hmmmmm any further suggestions?