Does pytorch use the cudatoolkit in Docker or the system

I installed CUDA 10.1 in the docker container. Upon running nvcc --version, I get

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

Now on my actual machine, when nvcc --version is run, I get

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85

If my pytorch in the docker container was installed with pip install torch==1.5.1+cu101 torchvision==0.6.1+cu101, does this mean that pytorch will simply use its own version of cuda?

Yes, the pip wheels and conda binaries ship with their own CUDA runtime (as well as cuDNN, NCCL, etc.), so you would only need to install the NVIDIA driver. If you want to build PyTorch from source or a custom CUDA extension, the local CUDA toolkit will be used.
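To make the distinction concrete, here is a minimal sketch that compares the CUDA release reported by the system nvcc with the runtime version baked into the torch binary. It assumes torch may or may not be installed and that nvcc may or may not be on the PATH, and degrades gracefully in both cases.

```python
# Sketch: compare the system CUDA toolkit (nvcc) with the CUDA runtime
# bundled in the PyTorch wheel. The two are independent: the wheel ships
# its own runtime, so they can legitimately differ.
import subprocess


def system_nvcc_version():
    """Return the CUDA release reported by the system nvcc, or None."""
    try:
        out = subprocess.run(
            ["nvcc", "--version"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    for line in out.splitlines():
        # e.g. "Cuda compilation tools, release 10.1, V10.1.243"
        if "release" in line:
            return line.split("release")[1].split(",")[0].strip()
    return None


def bundled_cuda_version():
    """Return the CUDA version the torch binary was built with, or None."""
    try:
        import torch
    except ImportError:
        return None
    return torch.version.cuda  # None for CPU-only builds


print("system nvcc  :", system_nvcc_version())
print("torch binary :", bundled_cuda_version())
```

On the setup described above, the two values would differ (9.1 on the host, 10.1 inside the wheel), which is fine for running PyTorch; only building from source or compiling extensions cares about the nvcc version.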

Hello. I came across another situation: my Docker machine has CUDA 11.4 installed, and I installed torch only through pip install torch==1.13.1 (the version specifier doesn’t end with +cu11x). Yet torch.cuda.is_available() returns True, even though I neither specified a CUDA version nor found any .so library files in my environment’s torch installation. Does this mean that PyTorch will use the system CUDA toolkit (11.4) instead?

No, this shouldn’t be the case, and you could check torch.version.cuda to see which CUDA version is used in your binaries.
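The check above can be sketched as follows; note that torch.cuda.is_available() and torch.version.cuda answer different questions (the former depends on the NVIDIA driver and a visible GPU, the latter reports the runtime the binary was built against). The snippet assumes torch may not be importable and guards for that.

```python
# Sketch: report which CUDA version the installed torch binary was built
# against. torch.version.cuda is a string like "11.7" for a CUDA wheel
# and None for a CPU-only build; it does not reflect the system toolkit.
try:
    import torch
    built_with = torch.version.cuda
except ImportError:
    built_with = None

print("CUDA version baked into the torch binary:", built_with)
```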

Thank you for your reply! That’s right, I’ve found that pip will automatically download the corresponding CUDA, cuDNN, and cuBLAS packages if I don’t specify the CUDA version of PyTorch.
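Those pip-installed runtime components show up as separate distributions (for recent Linux wheels, names like nvidia-cuda-runtime-cu11, nvidia-cudnn-cu11, and nvidia-cublas-cu11 are typical, though the exact set depends on the torch version and platform). A minimal sketch to list whatever landed in the current environment:

```python
# Sketch: list NVIDIA runtime packages that pip installed as torch
# dependencies. Returns an empty list in environments without them
# (e.g. CPU-only installs or older +cuXXX wheels that bundle the
# libraries inside the torch package itself).
from importlib import metadata

nvidia_pkgs = sorted(
    dist.metadata["Name"]
    for dist in metadata.distributions()
    if (dist.metadata["Name"] or "").lower().startswith("nvidia-")
)
print(nvidia_pkgs or "no pip-installed NVIDIA runtime packages found")
```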