How to check which CUDA version my PyTorch is using

Hello!

I have multiple CUDA versions installed on the server, e.g., /opt/NVIDIA/cuda-9.1 and /opt/NVIDIA/cuda-10, and /usr/local/cuda is linked to the latter. I believe I installed my PyTorch with CUDA 10.2, based on what I get from running torch.version.cuda.

How can I check which version of CUDA the installed PyTorch actually uses at runtime? I set CUDA_PATH=/opt/NVIDIA/cuda-9.1, but it still seems to run on a GPU without any problem.

Thanks,
Jaejin Cho


You could check the linked CUDA version via print(torch.version.cuda).
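For completeness, here is a minimal sketch of such a check. The helper name `describe_cuda_setup` is mine, and the values are injected so the snippet also works without a GPU; in practice you would pass `torch.version.cuda` and `os.environ`:

```python
import os

def describe_cuda_setup(build_cuda, env):
    """Summarize the CUDA version PyTorch was built with versus the
    toolkit the environment variables point at.

    build_cuda: value of torch.version.cuda (a string like "10.2",
                or None for CPU-only builds)
    env:        a mapping such as os.environ
    """
    lines = [f"torch built with CUDA: {build_cuda or 'CPU-only build'}"]
    for var in ("CUDA_HOME", "CUDA_PATH"):
        lines.append(f"{var}: {env.get(var, '<unset>')}")
    return "\n".join(lines)

# Typical use (requires torch):
#   import torch
#   print(describe_cuda_setup(torch.version.cuda, os.environ))
print(describe_cuda_setup("10.2", {"CUDA_PATH": "/opt/NVIDIA/cuda-9.1"}))
```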


Thank you for the quick answer.

I actually used torch.version.cuda and checked it, as in my original post (and it was 10.2). However, I was wondering why it still works with CUDA_PATH=/opt/NVIDIA/cuda-9.1 set. In other words, how does it find the correct CUDA version to use (or which environment variable does it use to do so) at runtime?


CUDA_PATH might be the wrong env var. I’m using CUDA_HOME, which works fine.

Thank you for the answer!

I added this code to my script:

import logging
import os

logging.warning('cuda version: {}'.format(torch.version.cuda))
logging.warning('CUDA_PATH: {}'.format(os.environ.get('CUDA_PATH')))
logging.warning('CUDA_HOME: {}'.format(os.environ.get('CUDA_HOME')))

The output looks like this:
WARNING: cuda version: 10.2
WARNING: CUDA_PATH: /opt/NVIDIA/cuda-9.1
WARNING: CUDA_HOME: /opt/NVIDIA/cuda-9.1

It seems to work fine, but I am still curious what PyTorch refers to for the correct CUDA version under the hood.

The CUDA_HOME env var is only used while building PyTorch from source to compile the CUDA kernels; changing it afterwards has no effect at runtime, so you would need to rebuild PyTorch to pick up a different toolkit.
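To illustrate the build-time lookup, here is a simplified sketch of how build tooling typically resolves the toolkit root: explicit env vars first, then nvcc on PATH, then the conventional /usr/local/cuda symlink. This is my own simplified version, not PyTorch's actual code:

```python
import os
import shutil

def find_cuda_home(env=None):
    """Simplified sketch of locating the CUDA toolkit at *build* time."""
    env = os.environ if env is None else env
    # 1. Explicit environment variables win.
    home = env.get("CUDA_HOME") or env.get("CUDA_PATH")
    if home:
        return home
    # 2. Otherwise, derive the root from nvcc's location on PATH:
    #    .../cuda-X.Y/bin/nvcc -> .../cuda-X.Y
    nvcc = shutil.which("nvcc")
    if nvcc:
        return os.path.dirname(os.path.dirname(nvcc))
    # 3. Fall back to the conventional symlink.
    return "/usr/local/cuda"

print(find_cuda_home({"CUDA_HOME": "/opt/NVIDIA/cuda-9.1"}))  # → /opt/NVIDIA/cuda-9.1
```

This is why setting CUDA_PATH or CUDA_HOME after installation does not change anything for a prebuilt wheel: nothing reads those variables at runtime.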

@ptrblck I am a little confused about the CUDA version vs. the torch version.

My CUDA_HOME currently points to cuda v11.1

While my existing torch installation reads torch.version.cuda=11.7

Which CUDA library is it actually calling? I am confused because this seemingly mismatched torch build runs just fine.

The PyTorch binaries ship with their own CUDA dependencies; your locally installed CUDA toolkit is only used if you build PyTorch from source or compile a custom CUDA extension.
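So a mismatch like CUDA_HOME pointing at 11.1 while torch.version.cuda reports 11.7 is harmless until you compile something. A small sketch of the sanity check you might run before building an extension (the helper name is mine; in practice you would compare torch.version.cuda against the toolkit version under CUDA_HOME):

```python
def toolkit_matches_build(build_cuda, toolkit_version):
    """Compare major.minor of the CUDA version PyTorch was built with
    against a locally installed toolkit (e.g. the one CUDA_HOME points at).
    Only relevant when compiling PyTorch or a CUDA extension from source;
    the prebuilt binaries bundle their own CUDA runtime.
    """
    def major_minor(v):
        return tuple(int(p) for p in v.split(".")[:2])
    return major_minor(build_cuda) == major_minor(toolkit_version)

print(toolkit_matches_build("11.7", "11.1"))  # → False
print(toolkit_matches_build("11.7", "11.7"))  # → True
```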
