I’m a beginner with PyTorch and I’m having trouble installing it. I’m using Ubuntu 20.04, I’ve got an NVIDIA Quadro M2200 graphics card, and I’ve installed CUDA 12.8 (it’s the only version of CUDA I can get to work…). Ubuntu comes with Python 3.8.10, so, as per the PyTorch install instructions, I downloaded Python 3.9.5 (with pip 20.0.2) and I’m using a virtual environment to run Python 3.9.
On the PyTorch “start locally” page, the only way to use 12.8 appears to be the preview (nightly) build, but when I run the install command it generates, pip fails with:
Looking in indexes: https://download.pytorch.org/whl/nightly/cu128
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
I don’t understand this claim, since you are unable to install the binaries, so it’s unclear to me how only CUDA 12.8 works on your system, given that CUDA 12.8 deprecated the Maxwell architecture (your M2200).
Keep in mind that the PyTorch binaries ship with their own CUDA runtime dependencies; your locally installed CUDA toolkit won’t be used unless you build PyTorch from source or compile custom CUDA extensions.
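A quick way to see the distinction (assuming both `nvcc` and a working `torch` install are on your PATH) is to compare the local toolkit’s version with the runtime bundled into the wheel:

```shell
# Version of the locally installed CUDA toolkit (only used for
# source builds and custom CUDA extensions):
nvcc --version

# Version of the CUDA runtime bundled into the PyTorch wheel --
# this is what your tensors actually run against:
python -c "import torch; print(torch.version.cuda)"
```

The two versions can legitimately differ, since the wheel never looks at the local toolkit.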
Since the Maxwell architecture is already deprecated in CUDA 12.8, our PyTorch binaries no longer support it either.
To use your Maxwell GPU you would have to install PyTorch binaries with an older CUDA version (11.8, 12.4, or 12.6).
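For example, following the index-URL pattern used by the “start locally” page, a CUDA 12.6 build could be installed with something like the following (run inside your Python 3.9 virtual environment; the exact package set and index URL are taken from the page’s generated commands and may change between releases):

```shell
# Install the stable PyTorch wheels built against CUDA 12.6,
# which still support Maxwell GPUs:
pip3 install torch torchvision torchaudio \
    --index-url https://download.pytorch.org/whl/cu126
```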
Ah, ok. If Maxwell architecture isn’t supported in 12.8 at all then that’ll be why! Thanks very much, I’ll go back to trying to get cuda 12.6 to install and use that. Or do you mean I don’t have to install cuda separately and that its all bundled up in pytorch? If so, I wish I’d understood that earlier, it would have saved me a lot of time!
When I say it worked, I mean I followed the CUDA and NVIDIA driver installation instructions on the NVIDIA website and played around with them until everything ran without errors and I reached the part of the post-install instructions where you build the cuda-samples and run ./deviceQuery. It correctly recognised the device, so I concluded the installation was successful; my previous attempts (12.6, and earlier attempts following the install instructions for 12.8) either didn’t get that far or didn’t recognise the device.
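For reference, the deviceQuery check described above corresponds roughly to the steps below (paths and build system differ between cuda-samples releases; newer releases build with CMake rather than the per-sample Makefile shown here):

```shell
# Fetch NVIDIA's sample programs and build the deviceQuery utility:
git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples/Samples/1_Utilities/deviceQuery
make          # older releases; newer releases use CMake from the repo root

# Should print the GPU name, compute capability, and "Result = PASS":
./deviceQuery
```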
It’s deprecated, so a build from source could still work, but I would recommend sticking to an older CUDA release that supported it properly.
Yes, you don’t need to install a CUDA toolkit locally as the PyTorch binary ships with its own CUDA runtime dependencies.
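Once a matching wheel is installed, you can confirm that the bundled runtime sees the GPU without any separately installed toolkit (this assumes the NVIDIA driver is installed; if the M2200 is Maxwell sm_52 it should report compute capability (5, 2)):

```shell
python - <<'EOF'
import torch
print(torch.__version__)          # installed wheel version
print(torch.version.cuda)         # bundled CUDA runtime, not the local toolkit
print(torch.cuda.is_available())  # True if the driver can see the GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_capability(0))  # e.g. (5, 2) for Maxwell
EOF
```

Only the NVIDIA driver needs to be installed system-wide; everything else comes with the wheel.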