Compiling from source for compute capability 2.0 and CUDA 8.0

I’ve got a machine with a Tesla C2075 and a Quadro 5000. Both of these cards have compute capability 2.0, and from this post it appears I need to be on CUDA 8.0 or lower 1.
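For anyone following along, here is a sketch of how to confirm what the driver reports for each card, assuming a CUDA-enabled torch build is importable. The helper function is my own and just encodes the rule from the linked post that Fermi-class (compute capability 2.x) GPUs top out at CUDA 8.0:

```python
# Guarded import: torch may not be installed (or installable) yet.
try:
    import torch
except ImportError:
    torch = None

def needs_cuda_8_or_lower(capability):
    """True for Fermi-class (compute capability 2.x) devices,
    which CUDA 9.0 dropped support for."""
    return capability[0] == 2

if torch is not None and torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        name = torch.cuda.get_device_name(i)
        cap = torch.cuda.get_device_capability(i)  # e.g. (2, 0) for a C2075
        if needs_cuda_8_or_lower(cap):
            print(f"{name}: capability {cap[0]}.{cap[1]} -> needs CUDA 8.0 or older")
```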

Additionally, my graphics cards do not support cuDNN. When I follow the instructions on the “Get Started Locally” page 2, the install completes and torch.cuda.is_available() even returns True, but it errors out saying my GPU is too old to be supported as soon as I try to move something into GPU memory. My guess is this is because it installs this build of PyTorch, which references cuDNN: py3.7_cuda8.0.61_cudnn7.1.2_2.
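As a small illustration of why I suspect the prebuilt binary: the conda build string itself records the CUDA and cuDNN versions the wheel was compiled against, so it can be pulled apart with a regex (the pattern here is my own guess at the naming scheme):

```python
import re

# The build string from the install above.
build = "py3.7_cuda8.0.61_cudnn7.1.2_2"

# Assumed layout: py<python>_cuda<cuda>_cudnn<cudnn>_<build number>
m = re.match(r"py([\d.]+)_cuda([\d.]+)_cudnn([\d.]+)_\d+$", build)
python_ver, cuda_ver, cudnn_ver = m.groups()

print(cuda_ver)   # 8.0.61
print(cudnn_ver)  # 7.1.2 -- the wheel links against cuDNN, which never
                  # shipped kernels for compute capability 2.x
```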

My next idea was to try building from source, since it appears from this post 3 that I don’t strictly need cuDNN support to use CUDA. However, when I follow the build-from-source steps on the getting started page 2, which specifically indicate CUDA 8.0, CMake errors out as soon as it detects CUDA 8.0: “PyTorch requires CUDA 9.0 and above.”
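In case it helps anyone diagnose this, here is the shape of the build I was attempting. If current master enforces CUDA >= 9.0, one option might be to check out an older tag that still accepted CUDA 8.0 and build with cuDNN disabled. This is a sketch, not a verified recipe: the v1.0.1 tag is a guess at a release that still accepted CUDA 8.0, and whether its build accepts sm_20 at all is something to verify first.

```shell
# Assumption: v1.0.1 still accepts CUDA 8.0 -- check the CUDA version
# check in the tag's CMake files before committing to a long build.
git clone --recursive --branch v1.0.1 https://github.com/pytorch/pytorch.git
cd pytorch

export USE_CUDNN=0                  # build without cuDNN (older trees used NO_CUDNN=1)
export TORCH_CUDA_ARCH_LIST="2.0"   # emit device code only for the Fermi cards

python setup.py install
```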

Unless I’m missing something, that documentation seems to be out of sync with the latest release. Getting this working would be worth my while if it’s feasible, but now that the source build has refused to compile, I’m not sure what to try next.