Help! How to Force a PyTorch Build from Source on an Older CUDA Version?

I am a music producer. Our studio's computers have hundreds of pieces of music software installed and configured over the past years, so we cannot easily switch to new machines.

Our Windows machines have Nvidia GTX 660M GPUs, so the highest Nvidia driver we can install is version 426.00, and the highest CUDA toolkit we can install is 10.1 Update 2.

Recently we have been working with an AI music tool that requires PyTorch 1.10.0 or above. Unfortunately, no PyTorch binaries >=1.10.0 were ever built against CUDA 10.1 and published as wheels, so I have to build from source myself.

No big deal. I have successfully built PyTorch 1.9.1 from source on CUDA 10.1 in the past.

The problem is, when I use the same environment to build PyTorch 1.10.0 or any later version, CMake throws an error saying CUDA 10.2 is required and aborts the build.

Here is the error output:

-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1 (found version "10.1")
-- Caffe2: CUDA detected: 10.1
-- Caffe2: CUDA nvcc is: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1/bin/nvcc.exe
-- Caffe2: CUDA toolkit directory: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.1

CMake Error at cmake/public/cuda.cmake:42 (message):
  PyTorch requires CUDA 10.2 or above.
Call Stack (most recent call first):
  cmake/Dependencies.cmake:1191 (include)
  CMakeLists.txt:653 (include)

Seriously?! Isn't building from source supposed to let you build any PyTorch against any CUDA? I've heard that many people have successfully built PyTorch >=1.10.0 on CUDA 10.1, so how did they manage it?

So, is there a build argument or command-line flag I can pass to bypass this CUDA version check, so that a newer PyTorch can be built on an older CUDA toolkit without being interrupted?

After all, "not supported" doesn't mean "guaranteed not to work", right?

What I want is to build this wheel first, torch-1.10.0+cu101-cp38-cp38-win_amd64.whl, pip install it, and then judge for myself whether it works.
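Once that wheel installs, this is roughly the sanity check I would run to see whether the build actually works on our GPU (only standard PyTorch calls, nothing specific to my setup):

import torch

# Report which CUDA toolkit the wheel was built against and whether the GPU is visible.
print("PyTorch:", torch.__version__)
print("Built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))

    # A tiny matrix multiply on the GPU as a smoke test.
    a = torch.randn(256, 256, device="cuda")
    b = torch.randn(256, 256, device="cuda")
    c = a @ b
    torch.cuda.synchronize()
    print("GPU matmul OK, mean:", c.mean().item())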

In my case, how do I force PyTorch to stop nagging and just do its job?!

No, and it wouldn’t make sense trying to support CUDA 1.0-12.2 in PyTorch.
You could check which 10.2-specific operators were added which require this minimal release from 2019. Also note that 10.2 supports the same GPU architectures as 10.1, so I’m unsure why you would need to use 10.1.

Don’t want to complain about Nvidia, but seriously? Nvidia never released any info on compatibilities between GPU models and graphic drivers.

The attached table from Wikipedia shows that CUDA 10.1 and 10.2 have the same requirements for CC 3.0 GPU models.

However, on the Nvidia download page, the installer file name is:
cuda_10.2.89_441.22_windows.exe

Meaning: if you want to use CUDA 10.2, you are required to install the bundled 441.22 graphics driver.

How could I possibly know whether this 441.22 driver is compatible with my GTX 660M GPU? Is it going to break my computer and cause a black screen of death?

Nvidia needs to be more transparent.

If a CUDA version is compatible with a GPU model, does that guarantee the driver that ships with it is also compatible with that GPU?

This is also untrue, and NVIDIA driver compatibility is explained here:

If you are using a new CUDA 10.x minor release, then the minimum required driver version is the same as the driver that’s packaged as part of that toolkit release.

Note the minimum specification.

Refer to the linked docs and let me know what’s missing.

To be honest, I have read that doc back and forth at least three times, but it still feels like rocket science to me.

My understanding, in plain English, is:

  1. CUDA 11.x has forward compatibility; CUDA 10.x doesn't.

  2. Right now I’m on a lower version of Nvidia driver 426.00. CUDA 10.2 comes with a higher version driver of 441.22.
    Meaning, in order for me to use CUDA 10.2, I have to update my current driver to that 441.22, am I right?

  3. How can I know whether this 441.22 driver is compatible with my GPU? That is my biggest concern!

Nvidia never says a thing about whether the driver packed with a 10.x toolkit is compatible with a given GPU.

Please elaborate.
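For what it's worth, here is how I read off my current driver version and GPU name programmatically, using the nvidia-ml-py (pynvml) bindings; the values in the comments are simply what my machine reports:

# pip install nvidia-ml-py   (exposes the pynvml module)
import pynvml

pynvml.nvmlInit()
try:
    driver = pynvml.nvmlSystemGetDriverVersion()   # "426.00" on my machine
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    name = pynvml.nvmlDeviceGetName(handle)        # "GeForce GTX 660M"
    # Older pynvml releases return bytes rather than str.
    if isinstance(driver, bytes):
        driver = driver.decode()
    if isinstance(name, bytes):
        name = name.decode()
    print("Driver:", driver)
    print("GPU:", name)
finally:
    pynvml.nvmlShutdown()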

After the discussion with Piotr @ptrblck, I did some extensive testing over the weekend, and here is what I found.

PyTorch 1.8 - Requires CUDA 10.1 or above - GPU CC >=3.0
PyTorch 1.9 - Requires CUDA 10.1 or above - GPU CC >=3.0

PyTorch 1.10 - Requires CUDA 10.2 or above - GPU CC >=3.0
PyTorch 1.11 - Requires CUDA 10.2 or above - GPU CC >=3.0
PyTorch 1.12 - Requires CUDA 10.2 or above - GPU CC >=3.0
PyTorch 1.13 - Requires CUDA 10.2 or above - GPU CC >=3.0

PyTorch 2.0 - Requires CUDA 11.0 or above - GPU CC >=3.5

Also, there is no need to worry about Nvidia driver compatibility: as long as your GPU's compute capability meets the CUDA requirement, whatever driver version is packed with the official CUDA toolkit will work on your GPU.

For any GPU in the Kepler sm_30 architecture category, the highest CUDA you can install is 10.2, and therefore the highest PyTorch you can build from source would be version 1.13.1.
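To check which row of that list applies to your own GPU, any working PyTorch install (my old 1.9.1 build, in my case) can report the compute capability:

import torch

# The compute capability (CC) determines which CUDA toolkits can still target the GPU,
# which in turn determines the newest PyTorch you can build from source.
major, minor = torch.cuda.get_device_capability(0)
print("GPU:", torch.cuda.get_device_name(0))
print("Compute capability: %d.%d" % (major, minor))   # 3.0 on my GTX 660M
print("CUDA toolkit of this build:", torch.version.cuda)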

Not exactly.
My graphics card is CC 3.0. The driver supplied with CUDA 10.1 supports the card.
However, the driver that comes with CUDA 10.2 does not. Hence I cannot use CUDA 10.2 with my card, even though the card is CC 3.0 capable.