PyTorch no longer supports this GPU because it is too old.

Are CUDA 5.2 devices still supported?

From the code changes it looks like only 3.0 and 5.0 devices specifically are getting their support dropped. Thanks!

1 Like

BTW you don’t have to install it from source.
You may simply do

    pip install --user http://download.pytorch.org/whl/cu80/torch-0.3.0-cp27-cp27mu-linux_x86_64.whl

and save yourself some time!
Torch-0.3.0 still supports my old :confused: GPU :tada:
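
If you want to double-check that the wheel picked up your card, here is a minimal sanity check (just a sketch, assuming the 0.3.0 API):

    import torch

    print(torch.__version__)               # should print 0.3.0
    print(torch.cuda.is_available())       # True if the CUDA 8.0 wheel sees the GPU
    print(torch.cuda.get_device_name(0))   # name of the old card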

2 Likes

Hi,
I’m a bit stuck with an M1000M GPU here. The current version no longer supports it:

    Found GPU0 Quadro M1000M which is of cuda capability 5.0.
    PyTorch no longer supports this GPU because it is too old.

I tried to compile from source (following the procedure from README > Installation > From Source), but it didn’t solve it. I’m OK with using older binaries, but the one suggested by @sangeet is not compatible… any other ideas?

@gdupont You can look at our page that links to older versions: http://pytorch.org/previous-versions/

0.3.0 should work with your GPU.

3 Likes

Ahah, it’s working! And now I discover my GPU doesn’t have enough memory to simply load my model :’‑(
Thanks anyway.

I use a K2200 for prototyping before I run my code on a compute server. Here are the steps to compile PyTorch in Anaconda:

First, install gcc-4.9 and g++-4.9 to compile the old CUDA dependencies:

    sudo apt-get install gcc-4.9 g++-4.9

Now, mostly the steps from the PyTorch website:

    conda upgrade conda
    conda upgrade anaconda

    conda install numpy pyyaml mkl mkl-include setuptools cmake cffi typing
    conda install -c pytorch magma-cuda80

    git clone --recursive https://github.com/pytorch/pytorch
    cd pytorch/

    export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"

Check out the release you want (0.3.1 at the moment):

    git checkout origin/v0.3.1

Make distutils use the 4.9 compilers:

    CC=gcc-4.9 CXX=g++-4.9 python setup.py install

Unfortunately, I couldn’t resolve the missing MKL libraries reported by ldd:

    ldd /home/../anaconda3/lib/python3.6/site-packages/torch/_C.cpython-36m-x86_64-linux-gnu.so
    
    [....]
    libmkl_gf_lp64.so => not found
    libmkl_gnu_thread.so => not found
    libmkl_core.so => not found
    [....]

Therefore, to run PyTorch code, prefix it with:

    LD_LIBRARY_PATH=/home/.../anaconda3/lib64/:/home/.../anaconda3/lib/ python my_pytorch_code.py
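
A quick smoke test (just a sketch, assuming the 0.3.1 API) to confirm the source build really runs on the K2200 - run it with the same LD_LIBRARY_PATH prefix as above:

    import torch

    x = torch.randn(4, 4).cuda()   # allocate on the GPU
    y = torch.mm(x, x)             # run a small kernel
    print(type(y), y.is_cuda)      # expect torch.cuda.FloatTensor and True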

We are having a problem: a number of people on our team are running NVIDIA Quadro M1200 cards, and the PyTorch error says the card has CUDA capability 5.0, but the official NVIDIA page clearly says that the compute capability of this card is 5.2: https://developer.nvidia.com/cuda-gpus

Is this a PyTorch bug?

That’s strange, since Wikipedia lists the card with a CUDA compute capability of 5.0, citing the same source you posted. TechPowerUp also states it has 5.0.

Could you check it on your system using deviceQuery? link
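
You can also ask PyTorch directly what it detects (a small sketch; device index 0 is an assumption):

    import torch

    major, minor = torch.cuda.get_device_capability(0)
    print(torch.cuda.get_device_name(0), "reports compute capability %d.%d" % (major, minor))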

By popular demand, we’re bringing back 5.0 support in the next release.

8 Likes

Thank you!!!
You will notice from here that NVIDIA actually had inaccurate documentation about the compute capability of some of their GPUs, so support for compute capability 5.0 is much appreciated (especially since, when we were sourcing laptops for our team, there weren’t many available with both a higher compute capability and PCIe SSDs): https://devtalk.nvidia.com/default/topic/1032409/cuda-setup-and-installation/incorrect-compute-capability-for-quadro-m1200/post/5253247/?offset=13#5253248

Sorry, what should we compile from source to get PyTorch working with an old GPU (one with CUDA capability 5.0)?

Sorry if my questions are very stupid, but I am new to PyTorch and don’t know how to get my GPU working with it.

Okay, yes, it was not only a stupid question but also one that was already answered above.

I downloaded 0.3.0 and it is working smoothly. Thank you! :slight_smile:

1 Like

That’s great :hugs: - will this be in the next minor release of 0.4, or in 0.5? Do you have an ETA?

this will be in 0.4, tomorrow.

1 Like

Ready to scale up from my desktop to the cloud… Just got this message trying to run on AWS g2.2xlarge:

" UserWarning:
Found GPU0 GRID K520 which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.

warnings.warn(old_gpu_warn % (d, name, major, capability[1]))"

Ok, so I tried installing PyTorch 0.4, but the error persists. I suppose I can build from source, or pull an older version of PyTorch.

Which AWS instances are people using with 0.4? (https://docs.aws.amazon.com/dlami/latest/devguide/gpu.html)
It seems P2 instances run on K80s, which have CUDA capability 5.0, which is no longer supported, right? And g3 instances use M60s, which are 5.1, so is that OK?

…So are the PyTorch devs expecting that AWS users only run on p3 instances (V100s, lowest tier at $3.06 per hour), or…? Maybe this just isn’t on the radar. I understand you’re busy and you’ve got lots of other things to worry about. :wink:

1 Like

For others compiling from source but still getting the warning: it might be working anyway. I’m getting the warning, but the GPU is still being used despite it. Again, this was after compiling from source.
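
For example, a quick check (just a sketch, assuming a 0.4-or-newer build) to convince yourself the warning is harmless:

    import torch

    x = torch.randn(2048, 2048).cuda()   # allocation shows up in nvidia-smi
    y = x.mm(x)
    torch.cuda.synchronize()             # block until the kernel has actually run
    print(y.is_cuda)                     # True means the op executed on the GPU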

Which version of PyTorch is that? 1.0?

Hello, can you use PyTorch (GPU) 0.3.0 on Windows 10? I cannot use the GPU version and can only use the CPU version! Could you do me a favor, please?

I’ve done these steps to build PyTorch from source.
I’ve installed CUDA 9.1 on Ubuntu with the latest NVIDIA driver,
all running in a conda Python 3.5 env.

I still get:

    Found GPU0 GeForce GTX 680 which is of cuda capability 3.0.
    PyTorch no longer supports this GPU because it is too old.

I thought building from source should avoid this problem; have I done something wrong?

Hi,

This should only be a warning, so it does not prevent you from using the GPU.
That being said, newer functions might not work if they use newer features.
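
For example (a hedged sketch, assuming a 0.4-or-newer build), you can wrap a newer op so an unsupported-kernel failure is caught instead of crashing mid-run:

    import torch

    x = torch.randn(16, 16, device="cuda")
    try:
        y = torch.matmul(x, x)    # basic ops generally still work
        print(y.shape)
    except RuntimeError as e:     # e.g. "no kernel image is available for execution on the device"
        print("This op is not supported on this GPU:", e)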