Requirements.txt for multiple CUDA architectures

Say I’m developing a model with a Turing architecture GPU. Based on the official installation guide, my requirements.txt could simply be

torch>=1.10

which works fine because the default wheels ship with the CUDA 10.2 runtime, which is compatible with Turing. However, if a colleague wants to continue developing the model, or we're looking to deploy on a machine with an Ampere architecture GPU, we'd need CUDA >= 11.1, installed via

-f https://download.pytorch.org/whl/cu113/torch_stable.html
torch==1.10.0+cu113

Is there a best practice which would allow both environments to share a requirements file such that running pip install -r requirements.txt would result in the correct CUDA version for both GPU architectures?

I suppose one solution is always using the latest CUDA that PyTorch is shipping, but what if the older GPU architecture is not backwards compatible? Are there any other downsides to using the latest?
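For context, one workaround I've played with (a hypothetical sketch, not an official solution) is a small helper that maps a GPU's compute capability to the matching wheel variant: Turing is compute capability 7.5, which the default CUDA 10.2 wheels cover, while Ampere (8.0/8.6) needs the cu113 wheels. The function names here are made up for illustration:

```python
# Hypothetical sketch: choose a PyTorch wheel variant from a GPU's
# compute capability. Turing = 7.5 (CUDA 10.2 wheels are enough);
# Ampere = 8.0 / 8.6 (needs CUDA >= 11.1, e.g. the cu113 wheels).

def wheel_suffix(major: int, minor: int) -> str:
    """Return the pip wheel suffix for a given compute capability."""
    if (major, minor) >= (8, 0):   # Ampere and newer need CUDA >= 11.1
        return "cu113"
    return "cu102"                 # default wheels cover up to Turing

def requirement_lines(major: int, minor: int, version: str = "1.10.0") -> str:
    """Emit the requirements.txt lines for the selected wheel variant."""
    suffix = wheel_suffix(major, minor)
    if suffix == "cu102":
        return f"torch=={version}"  # default wheel, no extra index needed
    return (f"-f https://download.pytorch.org/whl/{suffix}/torch_stable.html\n"
            f"torch=={version}+{suffix}")
```

On a machine that already has PyTorch, the capability tuple could come from `torch.cuda.get_device_capability()`; during a fresh install you'd have to query the driver instead (e.g. parse `nvidia-smi` output), which is exactly the chicken-and-egg problem that makes a single static requirements.txt awkward.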

So you can either have a conditional requirements.txt (see [pip - Is there a way to have a conditional requirements.txt file for my Python application based on platform? - Stack Overflow](https://stackoverflow.com/questions/29222269)) or create a different one for each CUDA version (which is pretty common).
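The second option (one file per CUDA build) might look like this, with illustrative file names:

```
# requirements-cu102.txt  (Turing and older; default CUDA 10.2 wheels)
torch==1.10.0

# requirements-cu113.txt  (Ampere; CUDA 11.3 wheels)
-f https://download.pytorch.org/whl/cu113/torch_stable.html
torch==1.10.0+cu113
```

Each environment then installs its own file, e.g. `pip install -r requirements-cu113.txt` on the Ampere machines.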

@marksaroufim I’m familiar with conditional requirements.txt based on Windows vs. Linux, but AFAIK there isn’t a way to specify CUDA version with that syntax. Is there a trick I’m missing?
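For reference, PEP 508 environment markers can only key off interpreter and OS facts such as `sys_platform`, `platform_machine`, or `python_version`; there is no marker that exposes the CUDA version or GPU architecture. A marker-based file can therefore express things like

```
pywin32>=300; sys_platform == "win32"
torch==1.10.0; python_version >= "3.7"
```

but nothing like `cuda_version >= "11.1"`, which is why this syntax doesn't solve the Turing/Ampere split.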