[Question] I have a question about installing PyTorch


I have a question for experts. I’m trying to install PyTorch, but I’m encountering errors. I have installed CUDA 12.3.0. Is PyTorch not compatible with CUDA 12.3.0? Please help.

PyTorch is compatible with CUDA 12.3 and will use your locally installed CUDA toolkit for source builds. The binaries ship with their own CUDA dependencies, won’t use your local CUDA toolkit, and only a properly installed NVIDIA driver is needed.
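To illustrate that rule (a sketch only; the helper name and version strings are my own, not a PyTorch API): the prebuilt wheels bundle their own CUDA runtime, so what matters is that the installed driver supports at least the CUDA version the wheel was built against. The locally installed toolkit only matters when you build from source.

```python
def binary_compatible(wheel_cuda: str, driver_cuda: str) -> bool:
    """Rough check: a prebuilt wheel (e.g. built against CUDA 12.1) runs as
    long as the installed driver reports support for that CUDA version or
    newer; the locally installed CUDA toolkit is irrelevant for binaries."""
    wheel = tuple(int(p) for p in wheel_cuda.split("."))
    driver = tuple(int(p) for p in driver_cuda.split("."))
    return driver >= wheel

# A cu121 wheel under a driver reporting CUDA 12.3 (as in nvidia-smi): fine.
print(binary_compatible("12.1", "12.3"))  # True
# A cu121 wheel under an old driver that only supports CUDA 11.8: not fine.
print(binary_compatible("12.1", "11.8"))  # False
```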


Thank you for your response.

Despite upgrading to CUDA 12.3.0, installing the appropriate NVIDIA driver for your PC, and updating pip to the latest version, you are still encountering an error. The error message is as follows:

"Looking in links: https://download.pytorch.org/whl/cu123/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch"

The command you used is: pip install torch torchvision torchaudio -f https://download.pytorch.org/whl/cu123/torch_stable.html

To address this issue, please ensure that the CUDA version you have (CUDA 12.3.0) is officially supported by checking the PyTorch website. Additionally, double-check the version specifier for torch in your installation command to ensure it is correct, and modify your command accordingly.

Make sure to adjust the CUDA version in the command to match your setup. If the issue persists, consider trying different combinations or checking the PyTorch GitHub repository for any updates or changes in supported versions.

Feel free to provide additional details or logs if the problem persists, and I’ll do my best to assist you.

I don’t know if you’ve just posted a response from a bot, but https://download.pytorch.org/whl/cu123/torch_stable.html is an invalid URL and you should use the provided ones from here.
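For reference, the install matrix on pytorch.org generates commands of this shape for the CUDA 12.1 wheels (the exact command can differ by release, so copy it from the selector rather than hand-writing the URL):

```shell
# CUDA 12.1 binaries, which run fine under a 12.3 driver.
# Taken from the pytorch.org "Get Started" selector, not hand-constructed:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```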

The issue has been resolved. Thank you for your guidance.

So, PyTorch 2.1.1 (and the nightly builds) can effectively run with CUDA version 12.3, and there is no need to downgrade to 12.1? @ptrblck

No, you would need to build from source if you need CUDA 12.3 as explained before.


I have the same issue, where llama fails to run on the GPU.
I traced it to torch!

| NVIDIA-SMI 545.29.06 | Driver Version: 545.29.06 | CUDA Version: 12.3 |

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Nov__3_17:16:49_PDT_2023
Cuda compilation tools, release 12.3, V12.3.103
Build cuda_12.3.r12.3/compiler.33492891_0

export PATH=/usr/local/cuda-12.3/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.3/lib64:$LD_LIBRARY_PATH

python -c "import torch; print('PyTorch Version:', torch.__version__, '\nCUDA Version:', torch.version.cuda, '\nCUDA Available:', torch.cuda.is_available())"

PyTorch Version: 2.1.1
CUDA Version: 12.1
CUDA Available: False
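One quick way to narrow down this kind of failure without a GPU session (a sketch; the helper below is illustrative, not a torch API): wheels from the PyTorch download index encode the build flavor in the local version suffix of `torch.__version__`, e.g. `2.1.1+cpu` for a CPU-only wheel versus `2.1.1+cu121` for CUDA 12.1 binaries, while wheels from PyPI may carry no suffix. A CUDA build that still reports `CUDA Available: False` (with an NVML warning, as above) usually points at the driver rather than the wheel.

```python
def build_flavor(version: str) -> str:
    """Classify a torch version string by its local-version suffix
    (wheels from download.pytorch.org carry one, e.g. '2.1.1+cpu' or
    '2.1.1+cu121'; wheels from PyPI may carry none)."""
    _, _, local = version.partition("+")
    if not local:
        return "unknown (no suffix; likely the default PyPI build)"
    if local.startswith("cu"):
        return f"CUDA {local[2:4]}.{local[4:]} binaries"
    if local == "cpu":
        return "CPU-only build"
    return local

print(build_flavor("2.1.1+cpu"))    # CPU-only build
print(build_flavor("2.1.1+cu121"))  # CUDA 12.1 binaries
```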

llama_new_context_with_model: compute buffer total size = 278.43 MiB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
16:54:47.668 [INFO ] private_gpt.components.embedding.embedding_component - Initializing the embedding model in mode=local
16:54:48.561 [WARNING ] py.warnings - /home/anaconda3/envs/privategpt/lib/python3.11/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")

Thanks @ptrblck for your clarification. I’ll proceed with a manual build of Torch with CUDA 12.3 now…

Refer to Bug Report No Version Of Pytorch for cuda 12.3 · Issue #112500 · pytorch/pytorch · GitHub
with Python 3.12.1 and CUDA 12.3, the Preview (Nightly) version of PyTorch works 🙂
OS: Windows 10 64bit