Last version to support CUDA Capability 3.0

I get you can’t keep supporting older GPUs, so this is not a complaint, just seeing if there is a way to explore torch.cuda() functionality.

I am on a Mac with a GeForce GT 750M, which is of CUDA capability 3.0 (driver CUDA 9.1, cuDNN 7, MacOSX10.13.sdk). I managed to fumble through and compile torch-0.4.0x; the CMake output detects the GPU (OpenMP, Magma and NNPACK detection fails; are these critical?).

Built using the following command:

MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ python setup.py install
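(As an aside, the source build can also be pointed at a specific GPU architecture. A hedged sketch of the same command, assuming the `TORCH_CUDA_ARCH_LIST` environment variable that PyTorch's build scripts honour; whether targeting 3.0 explicitly changes anything on this setup is an open question:)

```shell
# Hint the build at compute capability 3.0 so the CUDA kernels
# are compiled for the GT 750M (sm_30) rather than auto-detected.
export TORCH_CUDA_ARCH_LIST="3.0"
MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ python setup.py install
```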

It builds and installs, so time for some PyTorch goodness. But any time I run a “.cuda()” method I get the following:

Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 12:04:33)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> t = torch.rand(3)
>>> t

[torch.FloatTensor of size 3]
>>> r = t.cuda()
/anaconda3/lib/python3.6/site-packages/torch/cuda/ UserWarning:
    Found GPU0 GeForce GT 750M which is of cuda capability 3.0.
    PyTorch no longer supports this GPU because it is too old.

  warnings.warn(old_gpu_warn % (d, name, major, capability[1]))
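(The numbers in that warning can be queried directly. A minimal sketch, guarded so it also runs on machines where CUDA isn't usable; `describe_gpu` is a hypothetical helper, not a PyTorch API:)

```python
import torch

def describe_gpu(index=0):
    """Describe the GPU at `index`, or explain why CUDA is unusable."""
    if not torch.cuda.is_available():
        return "CUDA not available (CPU-only build, no driver, or unsupported GPU)"
    major, minor = torch.cuda.get_device_capability(index)
    name = torch.cuda.get_device_name(index)
    return f"{name}: compute capability {major}.{minor}"

print(describe_gpu())
```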

Umm… perhaps I have to install an older version. A little searching finds this post from a couple of days ago (Feb 18, 2018) suggesting that compiling from source should work; no mention of a version, so I assume the latest from GitHub should be fine. A general search of the PyTorch forums finds posts offering the same advice.

Umm… have I done something wrong? Is there an environment variable I need to set? Do I need to install an older version? This page, Previous Versions, suggests it’s just a matter of

git checkout vX.X.X

and then following the build instructions.
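(For completeness: a release checkout also needs the submodules synced to the matching revisions before rebuilding, or the build can fail in confusing ways. A sketch, with v0.3.1 standing in for whichever tag applies:)

```shell
# Check out a tagged release and bring submodules to the same revisions,
# then rebuild from a clean tree.
git checkout v0.3.1
git submodule update --init --recursive
python setup.py clean
python setup.py install
```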

What is the last version of PyTorch to support CUDA capability 3.0 (or have I done something wrong with the compile options)?

Might be a stupid question, but is it working?
This thread suggests ignoring the warning, as far as I understand. :wink:

Thanks for the link. Umm… did ‘r’ above initialise?

>>> r

[torch.cuda.FloatTensor of size 3 (GPU 0)]

It does appear to have created a torch.cuda.FloatTensor. Does this mean it is ‘working’?
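(One way to confirm beyond the repr is to ask the tensor itself. A minimal check, guarded so it also runs on CPU-only machines:)

```python
import torch

t = torch.rand(3)
if torch.cuda.is_available():
    r = t.cuda()
    # A CUDA tensor reports is_cuda == True and lives on a cuda device.
    print(r.is_cuda, r.device)
else:
    # Without a usable GPU the tensor stays on the CPU.
    print(t.is_cuda, t.device)
```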



yes that means it’s working :slight_smile:

We use a laptop to debug and only compute on the dev box. Is there any way to support compute capability 5.0 in future releases?

I need a bit of help on this one. I use a Quadro K2100M, which has a compute capability of 3.0.

I’ve managed to build from source (I didn’t check out any specific version), but I’m still having kernel image availability issues.

Is there a particular checkout that I need to do to get it working?

Thanks a lot again