How does PyTorch pick cuda versions?

I always use the standard download selection panel to match cuda versions (https://pytorch.org/):

pip3 install torch==1.8.2+cu102 torchvision==0.9.2+cu102 torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html

My two questions:

  1. Does PyTorch look strictly at what /usr/local/cuda links to when deciding which toolkit directory to use? If I have cuda linked to cuda-10.1, but also have cuda-10.2 around, which one would torch==x.y.z+cu102 pick?
--> ls -hdlt /usr/local/cuda*
drwxr-xr-x 20 root root 4.0K Oct  9 13:00 /usr/local/cuda-10.2/
lrwxrwxrwx  1 root root   20 Jul 22 13:37 /usr/local/cuda -> /usr/local/cuda-10.1/
drwxr-xr-x 25 root root 4.0K Jun 23 21:23 /usr/local/cuda-11.0/
drwxr-xr-x 19 root root 4.0K Apr 22  2021 /usr/local/cuda-10.1/
drwxr-xr-x 19 root root 4.0K Apr 22  2021 /usr/local/cuda-11.1/
  2. I’ve found that even when only cuda-10.1 exists in /usr/local, torch==x.y.z+cu102 still works most of the time. When should I expect it not to work, and what are the caveats of a mismatch, minor or major?
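For what it’s worth, the /usr/local/cuda symlink only comes into play when something actually compiles against the local toolkit (e.g. a custom CUDA extension). A rough sketch of the lookup order used by PyTorch’s extension-build helper (`find_cuda_home` here is a hypothetical simplified stand-in, not the real function):

```python
import os

def find_cuda_home():
    # Simplified sketch of how torch.utils.cpp_extension locates the toolkit:
    # environment variables take precedence over the /usr/local/cuda symlink.
    for var in ("CUDA_HOME", "CUDA_PATH"):
        if os.environ.get(var):
            return os.environ[var]
    # Fall back to the symlink — in the listing above this would resolve to
    # cuda-10.1, not cuda-10.2, unless CUDA_HOME is set explicitly.
    default = "/usr/local/cuda"
    return default if os.path.exists(default) else None
```

So for extension builds, exporting CUDA_HOME=/usr/local/cuda-10.2 would override the symlink.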

Your local CUDA toolkit won’t be used unless you build PyTorch from source or a custom CUDA extension, since the pip wheels and conda binaries use their own CUDA runtime.
You would thus only need to install a valid NVIDIA driver and can use the binaries directly.
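A quick way to confirm this from Python (the `torch.version.cuda` and `torch.cuda.is_available()` calls are real APIs; `wheel_tag` is just a hypothetical helper illustrating the "+cuXYZ" naming):

```python
def wheel_tag(cuda_version):
    # e.g. "10.2" -> "cu102", matching the "+cu102" suffix in the pip command
    return "cu" + cuda_version.replace(".", "")

try:
    import torch
    # The CUDA runtime bundled into the wheel — this comes from the build
    # tag (e.g. "+cu102"), NOT from whatever /usr/local/cuda points at.
    print("bundled CUDA runtime:", torch.version.cuda)
    # True only if the installed NVIDIA driver is new enough for that runtime.
    print("driver can run it:", torch.cuda.is_available())
except ImportError:
    pass  # torch not installed in this environment
```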

That explains it… So the choice really depends on whether the driver is compatible with the PyTorch CUDA version to be installed.