Pre-built pytorch for CUDA [compute capability] 3.0 on windows?

Hi Forum!

Would anybody know of a pre-built pytorch windows / CUDA 3.0
version? (It’s windows 10, if that matters.)

I’m aware that pytorch no longer formally supports older CUDA
versions, but I have seen older pre-built packages floating
around on the internet – just not this configuration.

(I will be installing pytorch on an older laptop with a CUDA 3.0
quadro k1100m graphics card. I’ll just be monkeying around,
so I don’t need the gpu, but I’d like to monkey around with
the gpu, as well. I’d prefer python 3, but I’d be willing to go
with python 2 to get the gpu.)

Thanks for any advice!

K. Frank

Hi Frank,

I assume you are referring to the compute capability 3.0, which should work with CUDA 6.0 – CUDA 10.1.

If I’m not mistaken, the minimal compute capability for the current binaries is >=3.5, so you could build from source to support this older GPU.

However, if you would like to play around with some legacy PyTorch version, you might get lucky finding some supported binaries here (built by @peterjc123).

Since these binaries are quite old by now, I would recommend building from source. :wink:

Hello Peter!

Thanks for your reply.

Indeed. I was referring to compute capability 3.0 (not that I knew
it at the time …). Thanks for clearing up that confusion of mine.

Do you know how I might determine which compute capability a
binary uses before installing it?

For example, on the main page, in the “QUICK
START LOCALLY” section, one is given the choice of CUDA
9.0 and 10.0 (and none), which I suppose I now understand
to be CUDA SDKs 9.0 and 10.0. Is there any way for me to
deduce whether such a download will support compute
capability 3.0, or whether it starts earliest at 3.5?
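For what it’s worth, the toolkit (SDK) version only bounds what a binary *could* have been built for; a rough, purely illustrative sketch of that mapping (ranges approximated from NVIDIA’s release notes, so treat them as assumptions):

```python
# Rough mapping from CUDA toolkit (SDK) release to the range of
# compute capabilities its compiler can target. Approximated from
# NVIDIA's release notes; a binary built with a given toolkit may
# still have been built for only a subset of this range.
TOOLKIT_CC_RANGE = {
    '8.0':  ((2, 0), (6, 2)),
    '9.0':  ((3, 0), (7, 0)),
    '10.0': ((3, 0), (7, 5)),
}

def toolkit_could_target(toolkit, cc):
    """Return True if the toolkit *could* have produced code for
    compute capability `cc` (a (major, minor) tuple)."""
    low, high = TOOLKIT_CC_RANGE[toolkit]
    return low <= cc <= high

print(toolkit_could_target('9.0', (3, 0)))   # True: possible, not guaranteed
print(toolkit_could_target('8.0', (7, 0)))   # False
```

So a “CUDA 9.0” download might support compute capability 3.0, but nothing in the SDK version alone tells you whether it actually does.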

Thanks for that link. In a similar vein, do you know how I might
figure out which of these binaries support which compute capability?

For example, the first file on that google drive is

Am I right that the “cu80” part of the file name suggests that the
binary is built with CUDA SDK 8.0 and that it could therefore
potentially support compute capability 2.0 – 6.2? Is there any
way to determine its minimum required compute capability (before
installing it)?

(Also am I right that the .tar.bz2 files are linux binaries, while the
.whl file is the sole windows binary?)

Thanks for helping clear these things up for me.

K. Frank

The compute capability is unfortunately not encoded in the file names, so the best approach would be to install the binary and just see if your GPU works, or print:

> PyTorch built with:
  - GCC 4.9
  - Intel(R) Math Kernel Library Version 2019.0.4 Product Build 20190411 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v0.18.1 (Git Hash 7de7e5d02bf687f971e7668963649728356e0c20)
  - OpenMP 201307 (a.k.a. OpenMP 4.0)
  - NNPACK is enabled
  - CUDA Runtime 10.0
  - NVCC architecture flags: -gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_50,code=compute_50
  - CuDNN 7.5.1
  - Magma 2.5.0
  - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS=  -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, USE_CUDA=True, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=True, USE_NNPACK=True, USE_OPENMP=ON, 
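The minimum supported architecture can be read straight off that “NVCC architecture flags” line; a small sketch (a hypothetical helper operating on the text shown above):

```python
import re

def min_compute_capability(config_text):
    """Extract the lowest compute capability from the NVCC
    architecture flags in a torch.__config__.show() dump."""
    caps = re.findall(r'compute_(\d+)', config_text)
    if not caps:
        return None            # CPU-only build, or no arch flags found
    return divmod(min(int(c) for c in caps), 10)   # e.g. 35 -> (3, 5)

# The flags line quoted above would give:
sample = ("-gencode;arch=compute_35,code=sm_35;"
          "-gencode;arch=compute_50,code=sm_50;"
          "-gencode;arch=compute_75,code=sm_75")
print(min_compute_capability(sample))   # (3, 5)
```

For the binary above, the minimum is compute capability 3.5, which is exactly why it rejects a 3.0 card.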

Generally yes, but the used compute capability has to be specified using the TORCH_CUDA_ARCH_LIST flag as used here.
So while the binaries support e.g. CUDA10, the compute capability might be limited to specific architectures.

That shouldn’t be the case. These .whl packages are used for linux systems.

I’m not sure, but I think conda uses .tar.bz2, while pip uses .whl?
Anyway, both are just zipped containers, so I’m not sure if the file ending is even important. :wink:
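Since both formats are just compressed archives, their contents can be listed without installing anything; a small standard-library sketch (the file names are placeholders):

```python
import tarfile
import zipfile

def list_package(path):
    """List the files inside a conda .tar.bz2 or a pip .whl
    without installing it (both are just compressed archives)."""
    if path.endswith('.whl'):                 # a wheel is a zip file
        with zipfile.ZipFile(path) as z:
            return z.namelist()
    with tarfile.open(path, 'r:bz2') as t:    # conda packages are bzip2 tarballs
        return t.getnames()

# e.g. list_package('pytorch-0.3.0-py36_0.3.0cu80.tar.bz2')
```

Seeing .dll files in the listing would suggest a Windows build, .so files a Linux one.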

Hello Peter!

Thanks for your reply (and sorry for being a little slow in following up).

Okay, this makes sense. (I tried installing a “CUDA 9.0” version from
the pytorch main page. It installed fine, and seemed to run – I didn’t
try anything substantive – but announced that its support for compute
capability started earliest at 3.5, and was not compatible with my 3.0
gpu. I imagine it runs fine with the cpu, but I didn’t test it.)

Now on to my follow-up question:

Coming back to your earlier post:

I downloaded a CUDA 8.0 version from your legacy-binaries link
(specifically It has some
.dll’s in it so I am supposing it is a windows build. I chose a CUDA
8.0 version in the hope that it might support compute capability 3.0
(but I don’t know that yet).

I ran:

pip3 install file:///C:/<path_to_bz_file>/pytorch-0.3.0-py36_0.3.0cu80.tar.bz2

and got the following error:

Processing c:\<path_to_bz_file>\pytorch-0.3.0-py36_0.3.0cu80.tar.bz2
    ERROR: Complete output from command python egg_info:
    ERROR: Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "c:\<path_to_python>\python36\lib\", line 452, in open
        buffer = _builtin_open(filename, 'rb')
    FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\USERXY~1\\AppData\\Local\\Temp\\pip-req-build-d2relny2\\'
ERROR: Command "python egg_info" failed with error code 1 in C:\Users\USERXY~1\AppData\Local\Temp\pip-req-build-d2relny2\

Note the “8.3” format of the username; a ten-alphabetic-character
(no spaces, numbers, or special characters) username has been
translated to the 8.3 form. I don’t know why or where.

I can confirm that C:\Users\USERXY~1\AppData\Local\Temp\ is
accessible (through the 8.3 username), and that the installing user
can create (and read) subdirectories and files there. (Note that
nothing else in the path to the “Temp” directory is longer than
eight characters, although the “pip-req-build-d2relny2” subdirectory
that pip created (or tried to create) is.)

I do not see a “pip-req-build-d2relny2” subdirectory in the Temp
directory (nor a file in such a subdirectory), so either it
wasn’t created, or pip cleaned up after itself after the install failed.

Would you (or anyone else) have some ideas about what might be
going on and how to fix it? This is on windows 10, if that matters.

Since I wasn’t able to install this particular pytorch build, I wasn’t
able to query it for which compute capability levels it supports. But
I can unzip / untar the file, so would you know if there is a
way I can figure out the compute capability from the unzipped file,
even if I can’t install it?

Thanks again for your help.

K. Frank

I assume @peterjc123 used this repo to create the legacy Windows binaries. So you might find these paths there.
I’m unfortunately not really familiar with Windows installations etc.

There might be a way using strings, grep and check all strings for the compute capability in the libraries.
Something like this might work, but I’m not even sure which library really contains all the interesting strings:

strings torch/lib/ | grep -Eo 'compute_[0-9]+' | sort --unique

Consider it a really dirty hack, but you might get lucky with this command.

That being said, I’m currently not sure if Windows has similar tools to inspect a .dll :face_with_raised_eyebrow:
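A rough cross-platform alternative is to scan the binary directly from Python, with no strings or grep needed (the library path below is just an example, not a confirmed location):

```python
import re

def find_compute_caps(path):
    """Scan a binary file (a .dll or .so) for embedded
    'compute_NN' strings -- a rough stand-in for strings | grep."""
    with open(path, 'rb') as f:
        data = f.read()
    return sorted(set(m.decode() for m in re.findall(rb'compute_\d+', data)))

# e.g. find_compute_caps(r'torch\lib\nvrtc-builtins64_90.dll')
```

Same caveat as above: it is a dirty hack, and it is not obvious which library actually carries the interesting strings.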

On Windows, the similar tool is called dumpbin, and it is accessible when you activate the VC developer environment. The command to use would be dumpbin /rawdata torch\lib\nvrtc-builtins64_90.dll. And the result will be like:

  0000000180007990: 20 63 6F 6D 70 75 74 65 72 20 73 6F 66 74 77 61   computer softwa
  00000001800079B0: 69 61 6C 0D 0A 20 2A 20 63 6F 6D 70 75 74 65 72  ial.. * computer
  0000000180010DF0: 0D 0A 69 6E 74 20 63 6F 6D 70 75 74 65 4D 6F 64 computeMod
  0000000180019390: 0D 0A 69 6E 74 20 63 6F 6D 70 75 74 65 50 72 65 computePre
  0000000180025770: 74 20 63 6F 6D 70 75 74 65 4D 6F 64 65 3B 0D 0A  t computeMode;..
  000000018002D880: 72 79 3B 0D 0A 69 6E 74 20 63 6F 6D 70 75 74 65  ry; compute
  000000018002DD10: 74 20 63 6F 6D 70 75 74 65 50 72 65 65 6D 70 74  t computePreempt
  000000018003F5B0: 3B 0D 0A 69 6E 74 20 63 6F 6D 70 75 74 65 4D 6F  ; computeMo
  000000018003FA40: 63 6F 6D 70 75 74 65 50 72 65 65 6D 70 74 69 6F  computePreemptio
  0000000180047B50: 3B 0D 0A 69 6E 74 20 63 6F 6D 70 75 74 65 50 72  ; computePr
  0000000180059770: 69 6E 74 20 63 6F 6D 70 75 74 65 4D 6F 64 65 3B  int computeMode;
  0000000180061D10: 69 6E 74 20 63 6F 6D 70 75 74 65 50 72 65 65 6D  int computePreem
  000000018006EEA0: 3B 0D 0A 69 6E 74 20 63 6F 6D 70 75 74 65 4D 6F  ; computeMo
  000000018006F330: 63 6F 6D 70 75 74 65 50 72 65 65 6D 70 74 69 6F  computePreemptio
  0000000180077440: 3B 0D 0A 69 6E 74 20 63 6F 6D 70 75 74 65 50 72  ; computePr
  000000018007B810: 75 72 65 20 63 6F 6D 70 75 74 65 5F 36 30 20 77  ure compute_60 w
  000000018007B870: 2D 61 72 63 68 3D 63 6F 6D 70 75 74 65 5F 36 30  -arch=compute_60

You’ll have to save it to a file, remove the left part, join the lines together, and then search for compute_[0-9]+.
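That post-processing could be sketched in Python like this (a sketch, assuming the dump was saved to a text file; it rebuilds the text from the hex columns rather than the truncated ASCII column):

```python
import re

def dumpbin_ascii(dump_text):
    """Rebuild the byte stream from `dumpbin /rawdata` output by
    decoding the hex columns, so strings split across line
    boundaries are joined back together."""
    chars = []
    for line in dump_text.splitlines():
        # address, colon, then one or more space-separated hex byte pairs
        m = re.match(r'\s*[0-9A-F]+:\s+((?:[0-9A-F]{2}\s+)+)', line)
        if m:
            chars.extend(chr(int(b, 16)) for b in m.group(1).split())
    return ''.join(chars)

# e.g. re.findall(r'compute_\d+', dumpbin_ascii(open('dump.txt').read()))
```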

BTW, .tar.bz2 is the package format for Conda. You should do conda install <pkgname> and pip install <pkgname>.whl. FYI, the packages >= 0.4 have CUDA CC >= 3.5.

Hello Pu!

Just to clarify, is the file I downloaded,, a “0.3.0” package, and
therefore is “< 0.4”, and therefore would be expected to support
CUDA CC 3.0?


K. Frank

I checked my answers in the Zhihu post and yes, it seems that CC3.0 is supported.

Hi Peter and Pu!

It looks like I was able to install an old pytorch with cuda for my gpu,
but I get an error when I try to do anything.

I installed* conda and (from the “Anaconda Prompt”) used conda to
install the legacy pytorch binary:

conda install file:///C:/<path_to_bz_file>/pytorch-0.3.0-py36_0.3.0cu80.tar.bz2

I then ran python (from the “Anaconda Prompt”) and ran:

>>> import torch
>>> print(torch.__version__)
>>> print(torch.version.cuda)
>>> print(torch.cuda.is_available())
>>> print(torch.cuda.current_device())
THCudaCheck FAIL file=D:\pytorch\pytorch\torch\lib\THC\THCGeneral.c line=120 error=30 : unknown error
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\<path_to_miniconda>\Miniconda3\lib\site-packages\torch\cuda\", line 302, in current_device
  File "C:\<path_to_miniconda>\Miniconda3\lib\site-packages\torch\cuda\", line 140, in _lazy_init
RuntimeError: cuda runtime error (30) : unknown error at D:\pytorch\pytorch\torch\lib\THC\THCGeneral.c:120

So it looks like pytorch recognizes my gpu and considers it acceptable,
but can’t actually lazy_init / initialize it.

Do you think there is something broken in my installation (that I can
fix), or should I regard this as a bug in the old pytorch 0.3.0 (and
potentially give up on trying to get this gpu working with pytorch)?

*) Further details on my installation process:

First I installed miniconda, specifically,
Miniconda3-4.5.4-Windows-x86_64.exe. I chose this older version
because it was the newest miniconda version that was python 3.6,
which I assume I need for this legacy python-3.6 version of pytorch.

(Of course, conda has to live in its own private sandbox, so now I
have two independent python 3.6 installations. But I expect that
anaconda considers this a feature, rather than a bug.)

I then used conda to install the legacy pytorch, as above.

When I ran python (from the “Anaconda Prompt”), and ran:

import torch

I got the error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\<path_to_miniconda>\Miniconda3\lib\site-packages\torch\", line 76, in <module>
    from torch._C import *
ImportError: numpy.core.multiarray failed to import

Okay, no numpy. (So conda, which bills itself as a package manager,
installs pytorch, but doesn’t install its numpy dependency. I suppose
I consider this a bug, not a feature, but what do I know?)

conda install numpy

works, and now

import torch

succeeds.

Regarding printing out the pytorch configuration in order to discover
the minimum cuda compute capability, I guess this older version
of pytorch doesn’t support this (or uses a different syntax):

>>> import torch
>>> print(torch.__config__.show())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'torch' has no attribute '__config__'

So I don’t know how to definitively probe this version of pytorch for
its minimum required compute capability (but I assume the fact that
torch.cuda.is_available() returns True means that the minimum
required compute capability is 3.0 or lower).

Anyway, thanks again for your help, and any further suggestions on
how to get this going would be appreciated.

Best regards.

K. Frank

RuntimeError: cuda runtime error (30)

might point to the driver.
Could you have a look at this post and check if any suggestion could help?

Hi Peter and Pu!

Thanks for your help getting pytorch working with my old cuda
compute capability 3.0 gpu. Thanks also to Andrei for his
post (in the thread linked to below) for the observation
that helped get me past my sticking point.

I will select Peter’s post linking to the legacy builds as the
solution although the whole discussion has been helpful.

The solution (for me) is to use a work-around for the
“cuda runtime error (30)” issue – namely don’t call

I was led to this by this post in the thread linked to by Peter:

I’ve posted some observations about “cuda runtime error (30)” here:

Here is a script showing some simple gpu-tensor manipulations:

import torch
print (torch.__version__)
torch.cuda.get_device_capability (0)
torch.cuda.get_device_name (0)
ct = torch.cuda.FloatTensor([[3.3, 4.4, 5.5], [6.6, 7.7, 8.8]])
ct + 0.01 * ct

And here is the output:

Python 3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 13:32:41) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print (torch.__version__)
>>> torch.cuda.get_device_capability (0)
(3, 0)
>>> torch.cuda.get_device_name (0)
'Quadro K1100M'
>>> ct = torch.cuda.FloatTensor([[3.3, 4.4, 5.5], [6.6, 7.7, 8.8]])
>>> ct

 3.3000  4.4000  5.5000
 6.6000  7.7000  8.8000
[torch.cuda.FloatTensor of size 2x3 (GPU 0)]

>>> ct + 0.01 * ct

 3.3330  4.4440  5.5550
 6.6660  7.7770  8.8880
[torch.cuda.FloatTensor of size 2x3 (GPU 0)]

>>> quit()

I’ll follow up if I have any issues actually running models on
this gpu.

Thanks again.

K. Frank
