If you look at this page, there are commands showing how to install a variety of PyTorch versions for a given CUDA version. However, the only CUDA 12 option seems to be 12.1. My cluster machine, on which I do not have admin rights to install anything different, has CUDA 12.0.
Yes, the current PyTorch code base supports all CUDA 12 toolkit versions if you build from source. The install matrix on the website shows the prebuilt binaries, which ship with their own CUDA runtime dependencies. If you install these, your locally installed CUDA toolkit won’t be used; you only need a properly installed NVIDIA driver.
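As a quick sanity check (a minimal sketch, assuming `torch` is importable in your environment), you can ask the installed wheel which CUDA version it was built against, independent of whatever toolkit `nvcc` reports:

```python
# Sketch: report the CUDA version the pip-installed PyTorch wheel bundles.
# The wheel ships its own CUDA runtime, so this can differ from the locally
# installed toolkit. Guarded so it degrades gracefully if torch is missing.
def bundled_cuda_version():
    try:
        import torch
    except ImportError:
        return None  # torch is not installed in this environment
    return torch.version.cuda  # e.g. "12.1" for the cu121 wheels

print(bundled_cuda_version())
```

If this prints a CUDA version and `torch.cuda.is_available()` returns `True`, the bundled runtime is working regardless of the toolkit version on the system.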
This did work, assuming I had the CUDA 12.0 driver installed, as verified by the nvidia-smi command. The only issue is that I had to update my environment variables for this one library, for whatever reason:
Yes, this is the case. If you have a local CUDA toolkit installed and are seeing library conflicts, removing the locally installed CUDA toolkit paths from LD_LIBRARY_PATH can be used as a workaround.
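The workaround amounts to exporting a filtered LD_LIBRARY_PATH before launching Python. Below is a minimal sketch of that filtering logic; the helper name and the paths are hypothetical examples, and in practice you would usually just `export LD_LIBRARY_PATH=...` in your shell with the toolkit entries removed:

```python
# Sketch: drop local CUDA toolkit directories from an LD_LIBRARY_PATH value
# so that the libraries bundled with the PyTorch wheel take precedence.
# strip_cuda_dirs and the example paths are illustrative, not an official API.
def strip_cuda_dirs(ld_library_path: str) -> str:
    """Remove path entries that point into a local CUDA toolkit install."""
    kept = [p for p in ld_library_path.split(":")
            if p and "cuda" not in p.lower()]
    return ":".join(kept)

# Hypothetical example: filter out a CUDA 12.0 toolkit directory.
print(strip_cuda_dirs("/usr/local/cuda-12.0/lib64:/opt/libs"))  # -> /opt/libs
```

Note this removes any entry containing "cuda"; adjust the filter if your site keeps unrelated libraries under similarly named directories.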
Hello Sir,
My CUDA version is 12.5 and my driver version is 555.85. There seems to be no compatible PyTorch version available for this combination, and I run into trouble when running nvidia/tts_en_fastpitch. What needs to be done to sort out this issue?
Thanks.
Is it right to interpret the statement that “the torch binary installed [will] ignore CUDA 12.0 on the system and use its own internal CUDA 12.1 libraries” as meaning that CUDA is statically linked into the PyTorch install, or am I thinking about this all wrong?
No, only the CUDA runtime is statically linked into PyTorch, which is the common approach. The other CUDA libraries (e.g. cuBLAS, cuDNN) are installed as pip package dependencies and are dynamically loaded at runtime.
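You can see those pip-packaged dependencies directly. The sketch below lists any `nvidia-*` distributions installed in the current environment (it returns an empty list on a CPU-only or source-built install); these are the packages whose shared libraries torch loads dynamically at runtime:

```python
# Sketch: enumerate the nvidia-* pip packages (cuBLAS, cuDNN, NCCL, ...)
# that the CUDA-enabled PyTorch wheels pull in as dependencies.
from importlib.metadata import distributions

def nvidia_pip_packages():
    """Return the sorted names of installed pip packages starting with 'nvidia-'."""
    names = set()
    for dist in distributions():
        name = dist.metadata["Name"]
        if name and name.lower().startswith("nvidia-"):
            names.add(name)
    return sorted(names)

print(nvidia_pip_packages())  # e.g. ['nvidia-cublas-cu12', 'nvidia-cudnn-cu12', ...]
```

Equivalently, `pip list | grep nvidia` in a shell shows the same set of packages.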