I get that you can’t keep supporting older GPUs, so this is not a complaint; I’m just seeing if there is a way to explore torch.cuda functionality.
I am on a Mac with a GeForce GT 750M, which is of CUDA capability 3.0 (driver CUDA 9.1, cuDNN 7, MacOSX10.13.sdk). I managed to fumble through and compile torch-0.4.0x; the cmake output detects the GPU, although OpenMP, Magma and NNPACK detection fails (are these critical?).
Built using the following command:
MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ python setup.py install
It builds and installs, so time for some PyTorch goodness. But any time I call a “.cuda()” method I get the following:
Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 12:04:33)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> t = torch.rand(3)
>>> t

 0.6910
 0.1801
 0.8268
[torch.FloatTensor of size 3]

>>> r = t.cuda()
/anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py:116: UserWarning:
    Found GPU0 GeForce GT 750M which is of cuda capability 3.0.
    PyTorch no longer supports this GPU because it is too old.

  warnings.warn(old_gpu_warn % (d, name, major, capability))
>>>
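In case it helps with the diagnosis, here is how I would check what the build actually detects. These are standard torch.cuda calls as far as I know, so treat the expected outputs in the comments as a sketch rather than verified results:

import torch

# CUDA toolkit version the build was compiled against (expecting '9.1' here)
print(torch.version.cuda)

# Whether the runtime considers CUDA usable at all
print(torch.cuda.is_available())

# Name and compute capability of the detected device (expecting (3, 0))
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))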
Umm… perhaps I have to install an older version. A little searching finds this post from a couple of days ago (Feb 18, 2018), which suggests that compiling from source should work; there is no mention of version, so I assume the latest from GitHub should be fine. A general search of the PyTorch forums finds posts offering the same advice.
Umm… have I done something wrong? Is there an environment variable I need to set? Do I need to install an older version? This page, Previous Version, suggests it’s just a matter of
git checkout vX.X.X
and then following the instructions in the README.md.
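In other words, something along these lines (I have left the tag as a placeholder since I don’t know which release to pick; the submodule step is the usual one from the README, if I’m reading it right):

git checkout vX.X.X            # placeholder: whichever release still targets capability 3.0
git submodule update --init    # re-sync third-party dependencies for that tag
MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ python setup.py install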
What is the last version of PyTorch to support CUDA capability 3.0 (or have I done something wrong with the compile options)?
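One more thought on the environment-variable question above: I believe the build can be pointed at a specific compute capability via TORCH_CUDA_ARCH_LIST, though I haven’t verified that this gets past the capability check, so this is just a guess at the invocation:

TORCH_CUDA_ARCH_LIST="3.0" MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ python setup.py install

If that’s wrong or there’s a better variable, pointers welcome.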