Say I’m developing a model on a machine with a Turing-architecture GPU. Based on the official installation guide, my requirements.txt
could simply be
torch>=1.10
which works fine, since the default PyTorch wheel ships with the CUDA 10.2 runtime, and CUDA 10.2 supports Turing. However, if a colleague wants to continue developing the model, or we want to deploy, on a machine with an Ampere-architecture GPU, we’d need a build with CUDA >= 11.1, installed via
-f https://download.pytorch.org/whl/cu113/torch_stable.html
torch==1.10.0+cu113
Is there a best practice that would let both environments share a single requirements file, so that running pip install -r requirements.txt
installs the correct CUDA build for either GPU architecture?
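Part of why a single pinned file can’t adapt on its own: the CUDA variant is baked into the version string as a PEP 440 local version identifier (the +cu113 suffix), which pip treats as part of the exact pin rather than something it can choose per machine. A minimal stdlib-only sketch of pulling that tag out of a requirement line (the helper name and example lines are mine, not from pip or PyTorch tooling):

```python
import re
from typing import Optional

def cuda_tag(requirement: str) -> Optional[str]:
    """Return the CUDA local-version tag (e.g. 'cu113') from a pinned
    requirement line, or None if the pin carries no local suffix."""
    match = re.search(r"\+(cu[\w.]+)", requirement)
    return match.group(1) if match else None

# The default wheel has no local tag; the CUDA 11.3 build carries one.
print(cuda_tag("torch==1.10.0"))        # -> None
print(cuda_tag("torch==1.10.0+cu113"))  # -> cu113
```

Because the tag is part of the pin itself, two machines wanting different CUDA builds genuinely need different requirement lines (or different index URLs), not just different environments.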
I suppose one solution is to always use the latest CUDA build that PyTorch ships, but what if support for the older GPU architecture has been dropped from that newer CUDA release? Are there any other downsides to always using the latest?
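To make the compatibility constraint concrete, here is a rough sketch of the minimum CUDA toolkit each recent architecture needs. The mapping and helper are illustrative and reflect NVIDIA’s support matrix as I understand it; they are not part of any PyTorch API:

```python
# Approximate minimum CUDA toolkit per GPU architecture, keyed by
# architecture name (compute capability noted in comments). Values are
# my reading of NVIDIA's support matrix, not an official table.
MIN_CUDA = {
    "pascal": (8, 0),   # sm_6x
    "volta":  (9, 0),   # sm_70
    "turing": (10, 0),  # sm_75
    "ampere": (11, 1),  # sm_86; sm_80 (A100) works from 11.0
}

def supports(arch: str, cuda: tuple) -> bool:
    """True if a CUDA toolkit of version `cuda` can target `arch`."""
    return cuda >= MIN_CUDA[arch]

# A CUDA 10.2 build covers Turing but not Ampere; 11.3 covers both.
print(supports("turing", (10, 2)), supports("ampere", (10, 2)))  # True False
print(supports("turing", (11, 3)), supports("ampere", (11, 3)))  # True True
```

Note the sketch only checks minimums: the other direction matters too, since newer toolkits occasionally drop very old architectures entirely (as far as I know, CUDA 11 removed Kepler sm_3x support), which is exactly the backwards-compatibility worry above.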