I finally installed CUDA 9.0 and PyTorch 1.0 from a *.whl file to make it work on a GTX 1070 (torch.cuda.is_available() returns True).
I want to check whether the installation itself was built with CUDA support, not just whether CUDA is available at runtime. How do I check that?
It seems conda list works:
pytorch 1.7.0 py3.8_cuda10.2.89_cudnn7.6.5_0 pytorch
though I wonder why PyTorch thinks there is no GPU…
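The build string above already answers the original question: a CUDA build encodes its toolkit version (here cuda10.2.89), while a CPU-only build contains cpu instead. At runtime, torch.version.cuda reports the same thing (or None for CPU-only builds). A minimal sketch of parsing the build string, assuming the usual conda naming scheme (the helper name is mine):

```python
def cuda_version_from_build(build_string):
    """Extract the CUDA version from a conda build string, or None if CPU-only.

    Assumes the usual PyTorch conda naming, e.g.
    "py3.8_cuda10.2.89_cudnn7.6.5_0" for a CUDA build and
    "py3.8_cpu_0" for a CPU-only build.
    """
    for part in build_string.split("_"):
        if part.startswith("cuda"):
            return part[len("cuda"):]
    return None
```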
(automl-meta-learning) miranda9~/automl-meta-learning $ nvidia-smi
Wed Dec 2 10:25:39 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.36.06 Driver Version: 450.36.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 TITAN Xp Off | 00000000:02:00.0 Off | N/A |
| 53% 83C P2 256W / 250W | 9121MiB / 12196MiB | 89% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 TITAN Xp Off | 00000000:03:00.0 Off | N/A |
| 45% 70C P2 70W / 250W | 4041MiB / 12196MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 TITAN Xp Off | 00000000:82:00.0 Off | N/A |
| 31% 45C P8 12W / 250W | 2MiB / 12196MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 TITAN Xp Off | 00000000:83:00.0 Off | N/A |
| 32% 46C P8 12W / 250W | 2MiB / 12196MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 32361 C python 9119MiB |
| 1 N/A N/A 24301 C python 4039MiB |
+-----------------------------------------------------------------------------+
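Note that the CUDA Version in the nvidia-smi header is the highest CUDA version the installed driver supports, not the version PyTorch was built against. If you want to grab it programmatically, a quick sketch (the regex assumes the standard header format shown above):

```python
import re

def parse_smi_cuda(header_line):
    """Pull the driver-supported CUDA version out of an nvidia-smi header line."""
    match = re.search(r"CUDA Version:\s*([\d.]+)", header_line)
    return match.group(1) if match else None
```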
(automl-meta-learning) miranda9~/automl-meta-learning $ python
Python 3.8.2 (default, Mar 26 2020, 15:53:00)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
False
I will try updating PyTorch to use CUDA 11, since that version mismatch seems to be the issue.
Update: that didn’t work…
pytorch 1.7.0 py3.8_cuda11.0.221_cudnn8.0.3_0 pytorch
...
torchvision 0.8.1 py38_cu110 pytorch
...
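For what it’s worth, NVIDIA drivers are backward compatible: a binary built against CUDA 10.2 runs fine on a driver that supports CUDA 11.0, so the original 10.2 build was probably not the real problem. A rough major.minor compatibility check (a sketch that ignores driver-specific minimum versions):

```python
def driver_supports(binary_cuda, driver_cuda):
    """True if the driver's supported CUDA version covers the binary's.

    Compares only major.minor, relying on the backward compatibility of
    newer drivers with binaries built against older CUDA toolkits.
    """
    major_minor = lambda v: tuple(int(x) for x in v.split(".")[:2])
    return major_minor(driver_cuda) >= major_minor(binary_cuda)
```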
(automl-meta-learning) miranda9~/automl-meta-learning $ nvidia-smi
Wed Dec 2 10:38:47 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.36.06 Driver Version: 450.36.06 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 TITAN Xp Off | 00000000:02:00.0 Off | N/A |
| 53% 83C P2 255W / 250W | 9121MiB / 12196MiB | 97% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 TITAN Xp Off | 00000000:03:00.0 Off | N/A |
| 49% 79C P2 244W / 250W | 4041MiB / 12196MiB | 69% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 TITAN Xp Off | 00000000:82:00.0 Off | N/A |
| 23% 28C P8 9W / 250W | 2MiB / 12196MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 TITAN Xp Off | 00000000:83:00.0 Off | N/A |
| 23% 33C P8 9W / 250W | 2MiB / 12196MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 32361 C python 9119MiB |
| 1 N/A N/A 24301 C python 4039MiB |
+-----------------------------------------------------------------------------+
(automl-meta-learning) miranda9~/automl-meta-learning $ python
Python 3.8.2 (default, Mar 26 2020, 15:53:00)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
False
>>>
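Before blaming the install, it is worth checking whether the process can see any GPU at all; on a shared cluster, a shell without a GPU allocation usually cannot. A small sketch, assuming the scheduler communicates the allocation via CUDA_VISIBLE_DEVICES (which most schedulers do):

```python
import os

def scheduler_gpu_allocation(environ=os.environ):
    """Return the GPU ids exposed via CUDA_VISIBLE_DEVICES, or None if unset.

    When the variable is unset, CUDA sees every GPU on the machine;
    schedulers restrict jobs by setting it explicitly (an empty string
    or "-1" hides all devices).
    """
    value = environ.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return None  # unrestricted: all local GPUs visible
    return [v for v in value.split(",") if v.strip() and v.strip() != "-1"]
```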
Duh, I need to request an interactive job by submitting my interactive script:
condor_submit -i interactive.sub
Oops, dumb mistake.
Request_gpus = 1
Request_cpus = 30
requirements = (CUDADeviceName != "Tesla K40m")
# requirements = (CUDADeviceName == "Quadro RTX 6000")
Queue
Eureka! Thanks for that reference.
Hi! I am new to CUDA and PyTorch. Could someone help me clarify some relevant concepts?
What is the difference between installing CUDA from the NVIDIA website and installing it on the command line with conda install pytorch cudatoolkit? If I install both, will there be a conflict between them?
Your locally installed CUDA toolkit won’t be used unless you build PyTorch from source or compile a custom CUDA extension, since the PyTorch binaries ship with their own CUDA runtime dependencies.
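On the conflict question: a practical way to see whether a full local toolkit is installed alongside the conda runtime is to look for the nvcc compiler, which the conda cudatoolkit package does not ship. A minimal sketch (the helper name is mine):

```python
import shutil

def has_local_toolkit():
    """True if a full CUDA toolkit (with the nvcc compiler) is on PATH.

    The conda cudatoolkit package ships only the runtime libraries the
    PyTorch binaries need; nvcc typically comes from a system-wide
    install from the NVIDIA website (or a cudatoolkit-dev package).
    """
    return shutil.which("nvcc") is not None
```

The two installs coexist: the local toolkit is used for compiling CUDA code, while the conda-packaged runtime is what the PyTorch binary loads.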