Hi Peter and Pu!
It looks like I was able to install an old pytorch with cuda for my gpu,
but I get an error when I try to do anything.
I installed* conda and (from the “Anaconda Prompt”) used conda to
install the legacy pytorch binary:
conda install file:///C:/<path_to_bz_file>/pytorch-0.3.0-py36_0.3.0cu80.tar.bz2
I then started python (from the “Anaconda Prompt”) and ran:
>>> import torch
>>> print(torch.__version__)
0.3.0b0+591e73e
>>> print(torch.version.cuda)
8.0
>>> print(torch.cuda.is_available())
True
>>> print(torch.cuda.current_device())
THCudaCheck FAIL file=D:\pytorch\pytorch\torch\lib\THC\THCGeneral.c line=120 error=30 : unknown error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\<path_to_miniconda>\Miniconda3\lib\site-packages\torch\cuda\__init__.py", line 302, in current_device
_lazy_init()
File "C:\<path_to_miniconda>\Miniconda3\lib\site-packages\torch\cuda\__init__.py", line 140, in _lazy_init
torch._C._cuda_init()
RuntimeError: cuda runtime error (30) : unknown error at D:\pytorch\pytorch\torch\lib\THC\THCGeneral.c:120
So it looks like pytorch recognizes my gpu and considers it acceptable,
but can’t actually lazy_init / initialize it.
Do you think there is something broken in my installation (that I can
fix), or should I regard this as a bug in the old pytorch 0.3.0 (and
potentially give up on trying to get this gpu working with pytorch)?
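One thing I could try, to tell whether error 30 is coming from pytorch or from the cuda driver itself, is to poke the driver directly with ctypes, bypassing pytorch entirely. Here is a sketch (I haven’t run this on my machine; the library names and the cuInit / cuDeviceGetCount calls are the standard cuda driver API, but I’m assuming my driver install exposes them):

```python
import ctypes

def probe_cuda_driver():
    """Ask the cuda driver directly (bypassing pytorch) whether it can
    initialize and how many devices it sees.  Returns (ok, message)."""
    # nvcuda.dll on Windows, libcuda.so on Linux
    for name in ("nvcuda.dll", "libcuda.so", "libcuda.so.1"):
        try:
            cuda = ctypes.CDLL(name)
            break
        except OSError:
            continue
    else:
        return (False, "cuda driver library not found")
    rc = cuda.cuInit(0)  # CUresult 0 == CUDA_SUCCESS
    if rc != 0:
        return (False, "cuInit failed with CUresult %d" % rc)
    count = ctypes.c_int(0)
    rc = cuda.cuDeviceGetCount(ctypes.byref(count))
    if rc != 0:
        return (False, "cuDeviceGetCount failed with CUresult %d" % rc)
    return (True, "driver initialized; %d device(s) found" % count.value)
```

If cuInit itself failed here, that would point at the driver rather than at pytorch 0.3.0.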
*) Further details on my installation process:
First I installed miniconda, specifically,
Miniconda3-4.5.4-Windows-x86_64.exe. I chose this older version
because it was the newest miniconda version that was python 3.6,
which I assume I need for this legacy python-3.6 version of pytorch.
(Of course, conda has to live in its own private sandbox, so now I
have two independent python 3.6 installations. But I expect that
anaconda considers this a feature, rather than a bug.)
I then used conda to install the legacy pytorch, as above.
When I started python (from the “Anaconda Prompt”) and ran:
import torch
I got the error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\<path_to_miniconda>\Miniconda3\lib\site-packages\torch\__init__.py", line 76, in <module>
from torch._C import *
ImportError: numpy.core.multiarray failed to import
Okay, no numpy. (So conda, which bills itself as a package manager,
installs pytorch, but doesn’t install its numpy dependency. I suppose
I consider this a bug, not a feature, but what do I know?)
conda install numpy
works, and now
import torch
works.
Regarding printing out the pytorch configuration in order to discover
the minimum cuda compute capability, I guess this older version
of pytorch doesn’t support this (or uses a different syntax):
>>> import torch
>>> print(torch.__config__.show())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'torch' has no attribute '__config__'
So I don’t know how to definitively probe this version of pytorch for
its minimum required compute capability (but I assume the fact that
torch.cuda.is_available()
returns True means that the minimum required compute capability is 3.0
or lower).
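For what it’s worth, if I ever get cuda to initialize, I believe even 0.3-era pytorch had torch.cuda.get_device_capability(), which would answer the capability question directly. A sketch (assuming that function does exist in this version):

```python
def gpu_compute_capability(device=0):
    """Return the gpu's (major, minor) compute capability via pytorch,
    or None if torch / cuda isn't usable (e.g. the error 30 above)."""
    try:
        import torch
        # assumed to be available in the 0.3-era torch.cuda API
        return torch.cuda.get_device_capability(device)
    except Exception:
        return None
```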
Anyway, thanks again for your help, and any further suggestions on
how to get this going would be appreciated.
Best regards.
K. Frank