PyTorch no longer supports this GPU because it is too old.

I installed CUDA 9.1 and the latest NVIDIA driver, 390.25:
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 390.25                 Driver Version: 390.25                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 950M    Off  | 00000000:01:00.0 Off |                  N/A |
    | N/A   47C    P0    N/A /  N/A |    455MiB /  4046MiB |      3%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0      1149      G   /usr/lib/xorg/Xorg                           192MiB |
    |    0      1906      G   compiz                                       182MiB |
    |    0      2391      G   …-token=88128EE1E6E9ED25FEB54AFDB2472B7F      76MiB |
    |    0      3837      G   /usr/bin/nvidia-settings                       0MiB |
    +-----------------------------------------------------------------------------+

but I get an error:

    Found GPU0 GeForce GTX 950M which is of cuda capability 5.0.
    PyTorch no longer supports this GPU because it is too old.
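The check behind this message is just a comparison of the device's compute capability against the set the prebuilt binaries were compiled for. A minimal sketch of that logic (the supported set below is an assumption for illustration, not PyTorch's actual list):

```python
import warnings

# Assumed set of compute capabilities covered by the prebuilt 0.3.1 wheels;
# per this thread, 3.0 and 5.0 were dropped while 5.2 was kept.
BINARY_CAPS = {(3, 5), (5, 2), (6, 0), (6, 1), (7, 0)}

def capability_supported(cap, supported=BINARY_CAPS):
    """Return True if a (major, minor) capability is covered by the binaries,
    emitting a warning (like PyTorch's) when it is not."""
    if cap not in supported:
        warnings.warn(
            "Found GPU of cuda capability %d.%d. PyTorch no longer supports "
            "this GPU because it is too old." % cap
        )
        return False
    return True
```

With torch installed, the real tuple comes from `torch.cuda.get_device_capability(0)`; a GTX 950M reports `(5, 0)`, which this check rejects.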



To reduce the size of the precompiled binaries (which was getting out of hand), we had to drop some of the old GPU architectures that are used by a minority of users.
You can still use your GPU by compiling from source, and it will work as before!


But I installed PyTorch 0.3.0 on Windows 10 and everything worked without this error.

This deprecation is new in the 0.3.1 release, as you can see from the release notes here.

We’ve got PyTorch CUDA bundles with compute capabilities starting from 3.0 and 3.5.
There is no separate Python package, but you can extract the package from the installer archive.

Since compiling from source is a bit of a headache, and I have a GPU with
a cuda capability of 5.0:

Does that message mean that PyTorch doesn’t support my GPU from 0.3.1 onward (which is the first version that prints this warning, AFAIK), or that it won’t support it in future releases (which is what deprecation usually means)?

Also, what kind of operations should I expect NOT to work?
Is there any reasonable way for me to tell when an unsupported operation was executed?

From 0.3.1 onward, CUDA capability 5.0 is not included in the prepackaged binary releases (so everything under torch.cuda will fail).
You can still get PyTorch to work with this architecture by compiling from source (then all operations will work).
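One way to confirm that a from-source build actually restored CUDA support is a quick smoke test. A hypothetical sketch that degrades gracefully when torch or a GPU is absent:

```python
def cuda_smoke_test():
    """Try one tensor op on the GPU and report what happened."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return "CUDA not available"
    # A matmul exercises the compiled CUDA kernels for this architecture.
    x = torch.randn(8, 8).cuda()
    y = (x @ x).cpu()
    return "ok: ran on %s" % torch.cuda.get_device_name(0)
```

If the build targeted your architecture, this returns an "ok" message; on a binary that dropped your capability, the `.cuda()` call is where things break.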

I see…

well that’s it for my GPU with pytorch I guess :stuck_out_tongue:

Usually warnings or deprecations are about future releases.
Anyway, what is the last binary release that does support my GPU?
something like ?


Yes, that would be 0.3.0, but that means you will be missing a lot of bugfixes.
Compiling from source should be pretty straightforward if you already have CUDA installed. Let me know if you need help with that.

Thanks bud, that actually was straightforward!

Just notice the version #:


Just making sure: so even with this version, I should just ignore the warning message about a too-old GPU?

Are you getting a warning when using it?

Compiling current master will give you what is, as of writing, going to become 0.4.
If you want to stay on a stable release, you can git checkout v0.3.1 to get the exact state corresponding to the 0.3.1 release before compiling.

Yes, it does give me a warning, but my code, which runs mostly on the GPU, works (unlike with the 0.3.1 binary). Should I ignore the warning?

So there’s a better chance of the master branch having bugs compared to the 0.3.1 branch?
If so, I’ll recompile and reinstall it. This might be the last message of this thread, so: thanks a lot, man, I really appreciate your help!

Are CUDA 5.2 devices still supported?

From the code changes it looks like only 3.0 and 5.0 devices specifically are getting their support dropped. Thanks!


BTW, you don’t have to install it from source.
You may simply do

    pip install --user

and save yourself some time!
Torch-0.3.0 still supports my old :confused: GPU :tada:


I’m a bit stuck with an M1000M GPU here. The current version no longer supports it:

    Found GPU0 Quadro M1000M which is of cuda capability 5.0.
    PyTorch no longer supports this GPU because it is too old.

I tried to compile from source (following the procedure in the README > Installation > From Source), but it didn’t solve it. I’m OK with using older binaries, but the one suggested by @sangeet is not compatible… any other ideas?

@gdupont You can look at our page that links to older versions:

0.3.0 should work with your GPU.


Ahah, it’s working! And now I discover my GPU doesn’t have enough memory to simply load my model :’‑(
Thanks anyway.

I use a K2200 for prototyping before I run my code on a compute server. Here are the steps to compile PyTorch in Anaconda:

First, install gcc-4.9 and g++-4.9 to compile the old CUDA dependencies:

    sudo apt-get install gcc-4.9 g++-4.9

Now, mostly the stuff from the PyTorch website:

    conda upgrade conda
    conda upgrade anaconda

    conda install numpy pyyaml mkl mkl-include setuptools cmake cffi typing
    conda install -c pytorch magma-cuda80

    git clone --recursive
    cd pytorch/

    export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"

Check out the release version (0.3.1 at the moment):

    git checkout v0.3.1

Make distutils use the 4.9 compilers:

    CC=gcc-4.9 CXX=g++-4.9 python setup.py install

Unfortunately, I couldn’t resolve these missing libraries:

    ldd /home/../anaconda3/lib/python3.6/site-packages/torch/
    [....] => not found
           => not found
           => not found

Therefore, to run PyTorch code, prefix it with:

    LD_LIBRARY_PATH=/home/.../anaconda3/lib64/:/home/.../anaconda3/lib/ python

We’re having a problem: a number of people on our team run NVIDIA Quadro M1200 cards, and the PyTorch error says the card has CUDA capability 5.0, but the official NVIDIA page clearly says this card’s compute capability is 5.2:

Is this a PyTorch bug?

That’s strange, since Wikipedia gives the card a CUDA compute capability of 5.0, citing the same source you posted. techpowerup also states it has 5.0.

Could you check it on your system using deviceQuery? link
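Short of compiling deviceQuery, a quick way to see what capability the driver actually reports is to ask torch itself. A small sketch (assuming torch is importable; it just falls back to a message otherwise):

```python
def report_capabilities():
    """List each visible GPU with the (major, minor) capability CUDA reports."""
    try:
        import torch
    except ImportError:
        return ["torch not installed"]
    if not torch.cuda.is_available():
        return ["no CUDA device visible"]
    return [
        "GPU%d: %s (capability %d.%d)"
        % ((i, torch.cuda.get_device_name(i)) + torch.cuda.get_device_capability(i))
        for i in range(torch.cuda.device_count())
    ]
```

If torch reports 5.0 here while deviceQuery reports 5.2, that would point at the CUDA runtime or driver rather than a PyTorch bug.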