I’ve added a GeForce GTX 1080 Ti to my machine (running Ubuntu 18.04 and Anaconda with Python 3.7) to utilize the GPU with PyTorch. Both cards are correctly identified:
$ lspci | grep VGA
03:00.0 VGA compatible controller: NVIDIA Corporation GF119 [NVS 310] (reva1)
04:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)
The NVS 310 handles my 2-monitor setup; I only want to utilize the 1080 Ti for PyTorch (see the snippet after the nvidia-smi output for how I plan to select it). I also installed the latest NVIDIA drivers currently in the repository, and that seems to be fine:
$ nvidia-smi
Sat Jan 19 12:42:18 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.87 Driver Version: 390.87 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 NVS 310 Off | 00000000:03:00.0 N/A | N/A |
| 30% 60C P0 N/A / N/A | 461MiB / 963MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... Off | 00000000:04:00.0 Off | N/A |
| 0% 41C P8 10W / 250W | 2MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
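Since I only want to use the 1080 Ti (GPU 1 above), my plan once everything works is to pin PyTorch to it explicitly, roughly like this (the device index 1 is just my assumption based on the nvidia-smi ordering; PyTorch may enumerate the cards differently):

import torch

# select the 1080 Ti if CUDA is available, otherwise fall back to CPU
device = torch.device('cuda:1' if torch.cuda.is_available() else 'cpu')
x = torch.randn(3, 3).to(device)
print(x.device)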
Driver version 390.xx allows running CUDA 9.1 (9.1.85) according to the NVIDIA docs. Since this is also the version in the Ubuntu repositories, I simply installed the CUDA Toolkit with:
$ sudo apt-get install nvidia-cuda-toolkit
And again, this seems to be alright:
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
and
$ apt-cache policy nvidia-cuda-toolkit
nvidia-cuda-toolkit:
Installed: 9.1.85-3ubuntu1
Candidate: 9.1.85-3ubuntu1
Version table:
*** 9.1.85-3ubuntu1 500
500 http://sg.archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages
100 /var/lib/dpkg/status
Lastly, I’ve installed PyTorch from scratch with conda:
conda install pytorch torchvision -c pytorch
Again, no errors as far as I can tell:
$ conda list
...
pytorch 1.0.0 py3.7_cuda9.0.176_cudnn7.4.1_1 pytorch
...
However, PyTorch doesn’t seem to find CUDA:
$ python -c 'import torch; print(torch.cuda.is_available())'
False
In more detail, if I force PyTorch to convert a tensor x to CUDA with x.cuda(), I get the error:
Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://...
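For reference, a minimal snippet that triggers it (the tensor contents are arbitrary):

import torch

x = torch.randn(3, 3)  # any tensor will do
x = x.cuda()           # raises the "Found no NVIDIA driver" error above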
What am I missing here? I’m new to this, but I think I’ve already searched the Web quite a bit for caveats regarding NVIDIA driver and CUDA toolkit versions.
EDIT: Some more outputs from PyTorch:
print(torch.cuda.device_count()) # --> 0
print(torch.cuda.is_available()) # --> False
print(torch.version.cuda) # --> 9.0.176
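And a few more diagnostics I can run if that helps (the ctypes call is just my attempt to check whether the driver library is visible to Python at all; I’m not sure it is the right way to test this):

import ctypes
import torch

print(torch.version.cuda)              # CUDA version PyTorch was built against
print(torch.backends.cudnn.version())  # bundled cuDNN version

try:
    ctypes.CDLL('libcuda.so.1')        # the driver library that nvidia-smi also uses
    print('libcuda.so.1 loaded')
except OSError as e:
    print('libcuda.so.1 not found:', e)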