Transferring data to GPU with CUDA falls into an infinite loop

I ran into a weird problem when running these simple lines:

import torch
cuda0 = torch.device('cuda:0')
x = torch.tensor([1., 2.], device=cuda0)

I straced the process and found it seems to fall into an infinite loop.
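For anyone hitting a similar hang, a minimal diagnostic sketch (using only the standard library, with an arbitrary 60-second timeout): faulthandler can dump the Python-level stack after a timeout, which is usually more telling than the strace output.

import faulthandler
import torch

# If nothing completes within 60 seconds, dump every Python thread's stack
# to stderr and exit, so the blocking call is visible.
faulthandler.dump_traceback_later(60, exit=True)

cuda0 = torch.device('cuda:0')
x = torch.tensor([1., 2.], device=cuda0)  # the line that appears to hang
print(x)

faulthandler.cancel_dump_traceback_later()  # reached only if the transfer succeeded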

The OS is CentOS 7.5, with PyTorch 0.4.0 and CUDA 9.0. There are two GPU cards in the machine, and “cuda:0” is a Tesla K40c:

02:00.0 VGA compatible controller: NVIDIA Corporation GK107GL [Quadro K420] (rev a1)
02:00.1 Audio device: NVIDIA Corporation GK107 HDMI Audio Controller (rev a1)
81:00.0 3D controller: NVIDIA Corporation GK110BGL [Tesla K40c] (rev a1)

I installed CUDA 9.0 from the RPM packages and the samples turn out to be OK:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 2 CUDA Capable device(s)

Device 0: "Tesla K40c"
  CUDA Driver Version / Runtime Version          9.2 / 9.0
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 11441 MBytes (11996954624 bytes)
  (15) Multiprocessors, (192) CUDA Cores/MP:     2880 CUDA Cores
  GPU Max Clock rate:                            745 MHz (0.75 GHz)
  Memory Clock rate:                             3004 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 129 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: "Quadro K420"
  CUDA Driver Version / Runtime Version          9.2 / 9.0
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 1998 MBytes (2094989312 bytes)
  ( 1) Multiprocessors, (192) CUDA Cores/MP:     192 CUDA Cores
  GPU Max Clock rate:                            876 MHz (0.88 GHz)
  Memory Clock rate:                             891 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 2 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
> Peer access from Tesla K40c (GPU0) -> Quadro K420 (GPU1) : No
> Peer access from Quadro K420 (GPU1) -> Tesla K40c (GPU0) : No

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.2, CUDA Runtime Version = 9.0, NumDevs = 2
Result = PASS

and the bandwidth test:

Running on...

 Device 0: Tesla K40c
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			10300.5

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			10240.1

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			183721.5

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

and PyTorch reports CUDA as available:

In [3]: torch.cuda.is_available()
Out[3]: True

I don’t know where the problem is. Could someone help me please?
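As a hedged diagnostic that may narrow things down, you can ask PyTorch itself which devices it enumerates and what compute capability it reports for each:

import torch

print(torch.cuda.device_count())  # how many devices PyTorch can see
for i in range(torch.cuda.device_count()):
    # device name plus (major, minor) compute capability
    print(i, torch.cuda.get_device_name(i), torch.cuda.get_device_capability(i))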

Hi,

How did you install PyTorch?

I’m not sure it’s an infinite loop; what you see here is probably just the CPU talking to the GPU, no?
In any case, this line is expected to take some time, maybe a few seconds.
Could you run the following lines in the REPL and let me know which one it freezes on:

import torch
print(torch.cuda._initialized)
torch.cuda._lazy_init()
print(torch.cuda._initialized)
a = torch.rand(10)
a = a.cuda()

Thanks for your help! I simply installed Anaconda3 and used ‘conda update pytorch’ to update PyTorch. I ran the test and the output is:

Python 3.6.6 |Anaconda custom (64-bit)| (default, Jun 28 2018, 17:14:51) 
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.cuda._initialized)
False
>>> torch.cuda._lazy_init()

It stopped at the ‘torch.cuda._lazy_init()’ line.

I’m not sure what the reason could be, given that the CUDA samples seem to work…
Could you double-check that you installed the PyTorch build packaged with the same CUDA version as the one installed on your system, as well as all the necessary packages (for example, the conda cuda92 package if you use CUDA 9.2)?
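A quick way to check what the installed build was compiled against (both attributes should be available in the 0.4.x releases), for comparison with the system CUDA installation:

import torch

print(torch.__version__)               # PyTorch version
print(torch.version.cuda)              # CUDA toolkit the binary was built with
print(torch.backends.cudnn.version())  # bundled cuDNN version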

I checked and found a weird “py36_cuda9.0.176_cudnn7.1.2_1[cuda91]” package from the soumith channel, so I deleted it and installed the right version of PyTorch. It seems the error came up because I first installed Anaconda3 (along with packages including PyTorch) and only then CUDA. After the swap I could initialize CUDA from PyTorch, but I got another error:

/home/user/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py:116: UserWarning: 
    Found GPU0 Quadro K420 which is of cuda capability 3.0.
    PyTorch no longer supports this GPU because it is too old.
    
  warnings.warn(old_gpu_warn % (d, name, major, capability[1]))
/home/user/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
  warnings.warn(warning.format(ret))
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1532440087449/work/aten/src/THC/generic/THCTensorMath.cu line=244 error=48 : no kernel image is available for execution on the device
Traceback (most recent call last):
  File "tr1.py", line 153, in <module>
    for ii, data in enumerate(train_loader,0):
  File "/home/user/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 317, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/user/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 194, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/user/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 194, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/user/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 171, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: cuda runtime error (48) : no kernel image is available for execution on the device at /opt/conda/conda-bld/pytorch_1532440087449/work/aten/src/THC/generic/THCTensorMath.cu:244
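The warning above says the prebuilt binaries no longer ship kernels for compute capability 3.0, so any tensor that lands on the Quadro K420 will hit this “no kernel image” error. A minimal sketch of pinning work to the K40c instead; the capability threshold comes from the warning, and the index is looked up rather than assumed:

import torch

dev = None
# Pick the first device with compute capability >= 3.5 (the K40c) instead of
# hard-coding an index, since enumeration order can change.
for i in range(torch.cuda.device_count()):
    if torch.cuda.get_device_capability(i) >= (3, 5):
        dev = torch.device('cuda', i)
        break

assert dev is not None, "no supported GPU found"
x = torch.rand(10, device=dev)  # allocated on the K40c, not the K420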

“nvidia-smi” said it couldn’t find one of the GPUs, so I rebooted the system. Now the GPUs seem to be OK:

[user@centos ~]$ nvidia-smi
Sun Jul 29 13:23:06 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.37                 Driver Version: 396.37                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro K420         Off  | 00000000:02:00.0  On |                  N/A |
| 28%   56C    P8    N/A /  N/A |    134MiB /  1997MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K40c          Off  | 00000000:81:00.0 Off |                    0 |
| N/A   49C    P8    21W / 235W |     11MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      2145      G   /usr/bin/X                                   121MiB |
+-----------------------------------------------------------------------------+

But CUDA still won’t initialize, and the code stops at the same line:

[user@centos ml]$ ipython
Python 3.6.6 |Anaconda custom (64-bit)| (default, Jun 28 2018, 17:14:51) 
Type 'copyright', 'credits' or 'license' for more information
IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import torch

In [2]: print(torch.cuda._initialized)
False

In [3]: torch.cuda._lazy_init()
^Z
[6]+  Stopped                 ipython

My conda list is as follows:

[user@centos ~]$ conda list
# packages in environment at /home/user/anaconda3:
#
# Name                    Version                   Build  Channel
_ipyw_jlab_nb_ext_conf    0.1.0                    py36_0  
alabaster                 0.7.11                   py36_0  
anaconda                  custom           py36hbbc8b67_0  
anaconda-client           1.6.14                   py36_0  
anaconda-navigator        1.8.7                    py36_0  
anaconda-project          0.8.2                    py36_0  
appdirs                   1.4.3            py36h28b3542_0  
asn1crypto                0.24.0                   py36_0  
astroid                   1.6.5                    py36_0  
astropy                   3.0.3            py36h14c3975_2  
atomicwrites              1.1.5                    py36_0  
attrs                     18.1.0                   py36_0  
automat                   0.7.0                    py36_0  
babel                     2.6.0                    py36_0  
backcall                  0.1.0                    py36_0  
backports                 1.0                      py36_1  
backports.shutil_get_terminal_size 1.0.0                    py36_2  
beautifulsoup4            4.6.0                    py36_1  
bitarray                  0.8.3            py36h14c3975_0  
bkcharts                  0.2                      py36_0  
blas                      1.0                         mkl  
blaze                     0.11.3                   py36_0  
bleach                    2.1.3                    py36_0  
blosc                     1.14.3               hdbcaa40_0  
bokeh                     0.13.0                   py36_0  
boto                      2.49.0                   py36_0  
bottleneck                1.2.1            py36h035aef0_1  
bzip2                     1.0.6                h14c3975_5  
ca-certificates           2018.03.07                    0  
cairo                     1.14.12              h8948797_3  
certifi                   2018.4.16                py36_0  
cffi                      1.11.5           py36h9745a5d_0  
chardet                   3.0.4                    py36_1  
click                     6.7                      py36_0  
cloudpickle               0.5.3                    py36_0  
clyent                    1.2.2                    py36_1  
colorama                  0.3.9                    py36_0  
conda                     4.5.8                    py36_0  
conda-build               3.12.0                   py36_1  
conda-env                 2.6.0                         1  
conda-verify              3.1.0                    py36_0  
constantly                15.1.0           py36h28b3542_0  
contextlib2               0.5.5                    py36_0  
cryptography              2.2.2            py36h14c3975_0  
cuda90                    1.0                  h6433d27_0    pytorch
cudatoolkit               9.0                  h13b8566_0  
cudnn                     7.1.2                 cuda9.0_0  
curl                      7.61.0               h84994c4_0  
cycler                    0.10.0                   py36_0  
cython                    0.28.4           py36hf484d3e_0  
cytoolz                   0.9.0.1          py36h14c3975_1  
dask                      0.18.2                   py36_0  
dask-core                 0.18.2                   py36_0  
datashape                 0.5.4                    py36_1  
dbus                      1.13.2               h714fa37_1  
decorator                 4.3.0                    py36_0  
distributed               1.22.0                   py36_0  
docutils                  0.14                     py36_0  
entrypoints               0.2.3                    py36_2  
et_xmlfile                1.0.1                    py36_0  
expat                     2.2.5                he0dffb1_0  
fastcache                 1.0.2            py36h14c3975_2  
filelock                  3.0.4                    py36_0  
flask                     1.0.2                    py36_1  
flask-cors                3.0.6                    py36_0  
fontconfig                2.13.0               h9420a91_0  
freetype                  2.9.1                h8a8886c_0  
fribidi                   1.0.4                h14c3975_0  
future                    0.16.0                   py36_0  
get_terminal_size         1.0.0                haa9412d_0  
gevent                    1.3.5            py36h14c3975_0  
glib                      2.56.1               h000015b_0  
glob2                     0.6                      py36_0  
gmp                       6.1.2                h6c8ec71_1  
gmpy2                     2.0.8            py36h10f8cd9_2  
graphite2                 1.3.11               h16798f4_2  
greenlet                  0.4.14           py36h14c3975_0  
gst-plugins-base          1.14.0               hbbd80ab_1  
gstreamer                 1.14.0               hb453b48_1  
h5py                      2.8.0            py36h8d01980_0  
harfbuzz                  1.8.4                hec2c2bc_0  
hdf5                      1.10.2               hba1933b_1  
heapdict                  1.0.0                    py36_2  
html5lib                  1.0.1                    py36_0  
hyperlink                 18.0.0                   py36_0  
icu                       58.2                 h9c2bf20_1  
idna                      2.7                      py36_0  
imageio                   2.3.0                    py36_0  
imagesize                 1.0.0                    py36_0  
incremental               17.5.0                   py36_0  
intel-openmp              2018.0.3                      0  
ipykernel                 4.8.2                    py36_0  
ipython                   6.4.0                    py36_1  
ipython_genutils          0.2.0                    py36_0  
ipywidgets                7.3.0                    py36_0  
isort                     4.3.4                    py36_0  
itsdangerous              0.24                     py36_1  
jbig                      2.1                  hdba287a_0  
jdcal                     1.4                      py36_0  
jedi                      0.12.1                   py36_0  
jeepney                   0.3.1                    py36_0  
jinja2                    2.10                     py36_0  
jpeg                      9b                   h024ee3a_2  
jsonschema                2.6.0                    py36_0  
jupyter                   1.0.0                    py36_4  
jupyter_client            5.2.3                    py36_0  
jupyter_console           5.2.0                    py36_1  
jupyter_core              4.4.0                    py36_0  
jupyterlab                0.32.1                   py36_0  
jupyterlab_launcher       0.10.5                   py36_0  
keyring                   13.2.1                   py36_0  
kiwisolver                1.0.1            py36hf484d3e_0  
lazy-object-proxy         1.3.1            py36h14c3975_2  
libcurl                   7.61.0               h1ad7b7a_0  
libedit                   3.1.20170329         h6b74fdf_2  
libffi                    3.2.1                hd88cf55_4  
libgcc-ng                 7.2.0                hdf63c60_3  
libgfortran-ng            7.2.0                hdf63c60_3  
libpng                    1.6.34               hb9fc6fc_0  
libsodium                 1.0.16               h1bed415_0  
libssh2                   1.8.0                h9cfc8f7_4  
libstdcxx-ng              7.2.0                hdf63c60_3  
libtiff                   4.0.9                he85c1e1_1  
libtool                   2.4.6                h544aabb_3  
libuuid                   1.0.3                h1bed415_2  
libxcb                    1.13                 h1bed415_1  
libxml2                   2.9.8                h26e45fe_1  
libxslt                   1.1.32               h1312cb7_0  
llvmlite                  0.24.0           py36hdbcaa40_0  
locket                    0.2.0                    py36_1  
lxml                      4.2.3            py36hf71bdeb_0  
lzo                       2.10                 h49e0be7_2  
markupsafe                1.0              py36h14c3975_1  
matplotlib                2.2.2            py36hb69df0a_2  
mccabe                    0.6.1                    py36_1  
mistune                   0.8.3            py36h14c3975_1  
mkl                       2018.0.3                      1  
mkl-service               1.1.2            py36h651fb7a_4  
mkl_fft                   1.0.4            py36h4414c95_1  
mkl_random                1.0.1            py36h4414c95_1  
more-itertools            4.2.0                    py36_0  
mpc                       1.1.0                h10f8cd9_1  
mpfr                      4.0.1                hdf1c602_3  
mpmath                    1.0.0                    py36_2  
msgpack-python            0.5.6            py36h6bb024c_0  
multipledispatch          0.5.0                    py36_0  
navigator-updater         0.2.1                    py36_0  
nbconvert                 5.3.1                    py36_0  
nbformat                  4.4.0                    py36_0  
nccl                      1.3.5                 cuda9.0_0  
ncurses                   6.1                  hf484d3e_0  
networkx                  2.1                      py36_0  
ninja                     1.8.2            py36h6bb024c_1  
nltk                      3.3.0                    py36_0  
nose                      1.3.7                    py36_2  
notebook                  5.6.0                    py36_0  
numba                     0.39.0           py36h04863e7_0  
numexpr                   2.6.5            py36hedc7406_0  
numpy                     1.15.0           py36h1b885b7_0  
numpy-base                1.15.0           py36h3dfced4_0  
numpydoc                  0.8.0                    py36_0  
odo                       0.5.1                    py36_0  
olefile                   0.45.1                   py36_0  
openpyxl                  2.5.4                    py36_0  
openssl                   1.0.2o               h20670df_0  
packaging                 17.1                     py36_0  
pandas                    0.23.3           py36h04863e7_0  
pandoc                    2.2.1                h629c226_0  
pandocfilters             1.4.2                    py36_1  
pango                     1.42.1               h8589676_0  
parso                     0.3.1                    py36_0  
partd                     0.3.8                    py36_0  
patchelf                  0.9                  hf484d3e_2  
path.py                   11.0.1                   py36_0  
pathlib2                  2.3.2                    py36_0  
patsy                     0.5.0                    py36_0  
pcre                      8.42                 h439df22_0  
pep8                      1.7.1                    py36_0  
pexpect                   4.6.0                    py36_0  
pickleshare               0.7.4                    py36_0  
pillow                    5.2.0            py36heded4f4_0  
pip                       10.0.1                   py36_0  
pixman                    0.34.0               hceecf20_3  
pkginfo                   1.4.2                    py36_1  
pluggy                    0.6.0                    py36_0  
ply                       3.11                     py36_0  
prometheus_client         0.3.0                    py36_0  
prompt_toolkit            1.0.15                   py36_0  
psutil                    5.4.6            py36h14c3975_0  
ptyprocess                0.6.0                    py36_0  
py                        1.5.4                    py36_0  
pyasn1                    0.4.3                    py36_0  
pyasn1-modules            0.2.2                    py36_0  
pycodestyle               2.4.0                    py36_0  
pycosat                   0.6.3            py36h14c3975_0  
pycparser                 2.18                     py36_1  
pycrypto                  2.6.1            py36h14c3975_9  
pycurl                    7.43.0.2         py36hb7f436b_0  
pyflakes                  2.0.0                    py36_0  
pygments                  2.2.0                    py36_0  
pylint                    1.9.2                    py36_0  
pyodbc                    4.0.23           py36hf484d3e_0  
pyopenssl                 18.0.0                   py36_0  
pyparsing                 2.2.0                    py36_1  
pyqt                      5.9.2            py36h22d08a2_0  
pysocks                   1.6.8                    py36_0  
pytables                  3.4.4            py36ha205bf6_0  
pytest                    3.6.3                    py36_0  
pytest-arraydiff          0.2              py36h39e3cac_0  
pytest-astropy            0.4.0                    py36_0  
pytest-doctestplus        0.1.3                    py36_0  
pytest-openfiles          0.3.0                    py36_0  
pytest-remotedata         0.3.0                    py36_0  
python                    3.6.6                hc3d631a_0  
python-dateutil           2.7.3                    py36_0  
pytorch                   0.4.1           py36_cuda9.0.176_cudnn7.1.2_1    soumith
pytz                      2018.5                   py36_0  
pywavelets                0.5.2            py36h035aef0_2  
pyyaml                    3.13             py36h14c3975_0  
pyzmq                     17.0.0           py36h14c3975_3  
qt                        5.9.6                h52aff34_0  
qtawesome                 0.4.4                    py36_0  
qtconsole                 4.3.1                    py36_0  
qtpy                      1.4.2                    py36_1  
readline                  7.0                  ha6073c6_4  
requests                  2.19.1                   py36_0  
rope                      0.10.7                   py36_0  
ruamel_yaml               0.15.42          py36h14c3975_0  
scikit-image              0.14.0           py36hf484d3e_1  
scikit-learn              0.19.1           py36hedc7406_0  
scipy                     1.1.0            py36hc49cb51_0  
seaborn                   0.9.0                    py36_0  
secretstorage             3.0.1                    py36_0  
send2trash                1.5.0                    py36_0  
service_identity          17.0.0           py36h28b3542_0  
setuptools                39.2.0                   py36_0  
simplegeneric             0.8.1                    py36_2  
singledispatch            3.4.0.3                  py36_0  
sip                       4.19.8           py36hf484d3e_0  
six                       1.11.0                   py36_1  
snappy                    1.1.7                hbae5bb6_3  
snowballstemmer           1.2.1                    py36_0  
sortedcollections         1.0.1                    py36_0  
sortedcontainers          2.0.4                    py36_0  
sphinx                    1.7.6                    py36_0  
sphinxcontrib             1.0                      py36_1  
sphinxcontrib-websupport  1.1.0                    py36_1  
spyder                    3.3.0                    py36_0  
spyder-kernels            0.2.4                    py36_0  
sqlalchemy                1.2.10           py36h14c3975_0  
sqlite                    3.24.0               h84994c4_0  
statsmodels               0.9.0            py36h035aef0_0  
sympy                     1.2                      py36_0  
tblib                     1.3.2                    py36_0  
terminado                 0.8.1                    py36_1  
testpath                  0.3.1                    py36_0  
tk                        8.6.7                hc745277_3  
toolz                     0.9.0                    py36_0  
torchvision               0.1.9            py36h7584368_1    soumith
tornado                   5.0.2            py36h14c3975_0  
traitlets                 4.3.2                    py36_0  
twisted                   17.5.0                   py36_0  
typing                    3.6.4                    py36_0  
unicodecsv                0.14.1                   py36_0  
unixodbc                  2.3.6                h1bed415_0  
urllib3                   1.23                     py36_0  
wcwidth                   0.1.7                    py36_0  
webencodings              0.5.1                    py36_1  
werkzeug                  0.14.1                   py36_0  
wheel                     0.31.1                   py36_0  
widgetsnbextension        3.3.0                    py36_0  
wrapt                     1.10.11          py36h14c3975_2  
xlrd                      1.1.0                    py36_1  
xlsxwriter                1.0.5                    py36_0  
xlwt                      1.3.0                    py36_0  
xz                        5.2.4                h14c3975_4  
yaml                      0.1.7                had09818_2  
zeromq                    4.2.5                hf484d3e_0  
zict                      0.1.3                    py36_0  
zlib                      1.2.11               ha838bed_2  
zope                      1.0                      py36_0  
zope.interface            4.5.0            py36h14c3975_0

I have the same problem as you. Did you manage to solve it?

Setting CUDA_VISIBLE_DEVICES=1 would help you.
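A minimal sketch of that suggestion: mask the old Quadro so that only the Tesla is visible to PyTorch. The index 1 follows the nvidia-smi listing above, but CUDA’s default enumeration order can differ from nvidia-smi’s, so forcing PCI bus order first makes the index unambiguous; verify it on your own machine.

import os

# Match CUDA's enumeration to the PCI / nvidia-smi ordering, then expose only
# GPU 1 (the Tesla K40c above). Set these before CUDA is initialized.
os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import torch
print(torch.cuda.device_count())      # should now report 1
print(torch.cuda.get_device_name(0))  # the only visible device is cuda:0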


I gave up on the antique K80 card and bought an RTX 2080 Ti.