Is CUDA 12.0 supported with any pytorch version?

If you look at this page, there are commands for installing a variety of PyTorch versions for a given CUDA version. However, the only CUDA 12 option seems to be 12.1. My cluster machine, on which I do not have admin rights to install anything different, has CUDA 12.0.

I tried to modify one of the lines like:

conda install pytorch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 pytorch-cuda=12.0 -c pytorch -c nvidia

But this did not work. Is there a way to install pytorch with CUDA 12.0? If so, is there documentation on how to go about that? Thanks!

Yes, the current PyTorch code base supports all CUDA 12 toolkit versions if you build from source. The install matrix on the website shows the prebuilt binaries which ship with their own CUDA runtime dependencies. If you install these your locally installed CUDA toolkit won’t be used. You would only need to properly install an NVIDIA driver.
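The reason the prebuilt CUDA 12.1 wheel still works on a machine whose driver reports CUDA 12.0 is CUDA's minor-version compatibility within a major release. A minimal sketch of that check (the helper name `driver_supports_wheel` is illustrative, not a real PyTorch API):

```python
# Sketch, assuming CUDA minor-version compatibility: a wheel built against a
# newer minor toolkit (12.1) can run on an older minor driver (12.0) as long
# as the major versions match.
def driver_supports_wheel(driver_cuda: str, wheel_cuda: str) -> bool:
    driver_major = int(driver_cuda.split(".")[0])
    wheel_major = int(wheel_cuda.split(".")[0])
    # Matching major versions is the key requirement; nvidia-smi reporting
    # "CUDA Version: 12.0" confirms the driver side.
    return driver_major == wheel_major

print(driver_supports_wheel("12.0", "12.1"))  # True
print(driver_supports_wheel("11.8", "12.1"))  # False
```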


Thank you @ptrblck. So are you saying I need to build from source against CUDA 12.0? Or are you saying I can type something like:

pip3 install torch torchvision torchaudio

And the torch binary installed will ignore CUDA 12.0 on the system and use its own internal CUDA 12.1 libraries? Or are you saying both? :slight_smile:

For posterity, using

pip3 install torch torchvision torchaudio

did work, assuming I had the CUDA 12.0 driver installed, as verified by the nvidia-smi command. The only issue is that I had to update my environment variables for this one library, for whatever reason:

export LD_LIBRARY_PATH=$HOME/.local/lib/python3.10/site-packages/nvidia/nvjitlink/lib:$LD_LIBRARY_PATH
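For anyone who wants to reconstruct that path programmatically rather than hard-coding it, a small sketch (assumes a `pip install --user` layout; the directory only exists once the torch wheel and its nvidia-nvjitlink dependency are installed):

```python
import os
import site

# Sketch: build the nvjitlink library path used in the export above from the
# user site-packages directory instead of hard-coding the Python version.
user_site = site.getusersitepackages()
nvjitlink_dir = os.path.join(user_site, "nvidia", "nvjitlink", "lib")
print(nvjitlink_dir)
```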

Yes, this is the case. If you have a local CUDA toolkit installed and are seeing library conflicts, removing the locally installed CUDA toolkit from the LD_LIBRARY_PATH can be used as a workaround.
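A minimal sketch of that workaround, filtering local CUDA toolkit directories out of the search path so the libraries bundled with the pip wheel win (the example path string is made up for illustration):

```python
# Sketch: drop local CUDA toolkit entries (e.g. /usr/local/cuda-12.0/lib64)
# from a hypothetical LD_LIBRARY_PATH value, keeping everything else.
ld_library_path = "/usr/local/cuda-12.0/lib64:/opt/some/lib"
kept = [p for p in ld_library_path.split(":") if "/cuda" not in p]
print(":".join(kept))  # /opt/some/lib
```

In a real shell you would then re-export the filtered value with `export LD_LIBRARY_PATH=...` before launching Python.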

Good to hear it’s working now! :slight_smile: