PyTorch 2.1 compatibility with CUDA 12.3

llama fails to run on the GPU.
Traced it to torch! Torch is using CUDA 12.1. I tried multiple approaches: removed 12.1 to make it use 12.3, downgraded the NVIDIA driver. No joy! All help is appreciated. Running on openSUSE Tumbleweed.
PyTorch Version: 2.1.1
CUDA Version: 12.1
CUDA Available: False


| NVIDIA-SMI 545.29.06 | Driver Version: 545.29.06 | CUDA Version: 12.3 |

nvcc: NVIDIA (R) Cuda compiler driver

Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Nov__3_17:16:49_PDT_2023
Cuda compilation tools, release 12.3, V12.3.103
Build cuda_12.3.r12.3/compiler.33492891_0

export PATH=/usr/local/cuda-12.3/bin:$PATH

export LD_LIBRARY_PATH=/usr/local/cuda-12.3/lib64:$LD_LIBRARY_PATH

python -c "import torch; print('PyTorch Version:', torch.__version__, '\nCUDA Version:', torch.version.cuda, '\nCUDA Available:', torch.cuda.is_available())"

PyTorch Version: 2.1.1

CUDA Version: 12.1
CUDA Available: False

llama_new_context_with_model: compute buffer total size = 278.43 MiB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
16:54:47.668 [INFO ] private_gpt.components.embedding.embedding_component - Initializing the embedding model in mode=local
16:54:48.561 [WARNING ] py.warnings - /home/anaconda3/envs/privategpt/lib/python3.11/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
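(For anyone hitting the same mismatch, a slightly expanded version of the check above. This is a minimal sketch of my own and only uses standard torch attributes:)

import torch

print("PyTorch:", torch.__version__)          # e.g. 2.1.1+cu121 for a CUDA wheel, 2.1.1+cpu for a CPU-only one
print("Built for CUDA:", torch.version.cuda)  # None means a CPU-only build
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device count:", torch.cuda.device_count())
    print("Device 0:", torch.cuda.get_device_name(0))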

Check your NVIDIA driver and reinstall it if needed.
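(If the "Can't initialize NVML" warning persists, one way to sanity-check the driver from Python is to query NVML directly. This is just a sketch; it assumes the nvidia-ml-py package, which is not mentioned in this thread, is installed via pip install nvidia-ml-py:)

import pynvml  # assumption: provided by the nvidia-ml-py package

pynvml.nvmlInit()  # raises an error if the driver/NVML library is broken or mismatched
print("Driver version:", pynvml.nvmlSystemGetDriverVersion())
print("GPU count:", pynvml.nvmlDeviceGetCount())
pynvml.nvmlShutdown()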

Already did! Tried the following versions. No joy!
NVIDIA-Linux-x86_64-535.129.03.run
NVIDIA-Linux-x86_64-545.29.06.run

The issue is now solved!

Could you post your solution here?

My friend, if you are prepared to receive help on a technical forum, please also be prepared to post the solution if you arrive at it yourself.


Do you have an idea how to install it for:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Cuda compilation tools, release 12.3, V12.3.107
Build cuda_12.3.r12.3/compiler.33567101_0

so that the following no longer returns False:

import torch
print(torch.cuda.is_available())
=> False

Follow these instructions and it should work.
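(Once a build is installed, a quick end-to-end smoke test like the following, a sketch of my own, confirms the GPU path actually works:)

import torch

assert torch.cuda.is_available(), (
    f"torch {torch.__version__} was built for CUDA {torch.version.cuda}, "
    "but no usable GPU/driver was found"
)
x = torch.randn(3, device="cuda")  # allocates directly on the GPU
print(x * 2)                       # runs a kernel; printing copies back to CPU and syncs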

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://download.pytorch.org/whl/cu121
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch

Trying

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 --no-cache-dir

?

You asked your initial question in this topic which is specific to CUDA 12.3. If you really need this version for some reason you would need to build PyTorch from source as indicated in my previous answer.
Now you post a pip install command instead, which would install PyTorch binaries with CUDA dependencies.
Could you explain what you actually want?

I want to use the NVIDIA GPU via Jupyter…

After installing
with (!pip install torch torchvision torchaudio -f https://download.pytorch.org/whl/cu123/torch_stable.html)
or
(!pip3 install torch torchvision torchaudio -f https://download.pytorch.org/whl/torch_stable.html)

it seems fine, except for:

Defaulting to user installation because normal site-packages is not writeable …
Requirement already satisfied…

But unfortunately, the response is the same.



import torch

torch.cuda.is_available()

Out[1]:

False

🙏

Uninstall the current PyTorch binary and just run a supported command from the install matrix.

(base) C:\Users\aymen>conda uninstall pytorch
Collecting package metadata (repodata.json): done
Solving environment: failed

PackagesNotFoundError: The following packages are missing from the target environment:

  • pytorch

How are you importing torch if nothing was installed?
In the end you should either uninstall all PyTorch binaries and install the current stable or nightly release (a simple pip install torch would be enough to install torch==2.1.2+cu121) or create a new and empty virtual environment and install PyTorch there.
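(A minimal sketch of the fresh-environment route described above; the environment name is just an example, and the pip install torch line is the one quoted in the answer:)

# Shell steps first:
#   python -m venv fresh-env
#   source fresh-env/bin/activate      (on Windows: fresh-env\Scripts\activate)
#   pip install torch
# Then, inside that environment:
import torch

print(torch.__version__)          # should report a +cu121 build, not +cpu
print(torch.cuda.is_available())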

I have erased everything (conda) and installed again; same problems.
Actually I now have this:

The following packages will be UPDATED:

pytorch pkgs/main::pytorch-2.1.0-cpu_py311hd0~ → pytorch::pytorch-2.1.2-py3.11_cuda12.1_cudnn8_0
torchvision pkgs/main::torchvision-0.15.2-cpu_py3~ → pytorch::torchvision-0.16.2-py311_cu121

Proceed ([y]/n)? y

Downloading and Extracting Packages

CancelledError()
CancelledError()
CancelledError()
CondaError: Downloaded bytes did not match Content-Length
url: https://conda.anaconda.org/pytorch/win-64/pytorch-2.1.2-py3.11_cuda12.1_cudnn8_0.tar.bz2
target_path: C:\ProgramData\anaconda3\pkgs\pytorch-2.1.2-py3.11_cuda12.1_cudnn8_0.tar.bz2
Content-Length: 1339081254
downloaded bytes: 53705125

CancelledError()
CancelledError()

That was for:
(base) C:\Users\aymen>conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia

but then:

The following packages will be UPDATED:

pytorch pkgs/main::pytorch-2.1.0-cpu_py311hd0~ → pytorch::pytorch-2.1.2-py3.11_cuda11.8_cudnn8_0
torchvision pkgs/main::torchvision-0.15.2-cpu_py3~ → pytorch::torchvision-0.16.2-py311_cu118

Proceed ([y]/n)? y

Downloading and Extracting Packages

Preparing transaction: done
Executing transaction: done

it works!

However,

torch.cuda.is_available()

still gives False, unfortunately!


Brother, have you got the solution? My CUDA version is 12.3. I've installed CUDA and cuDNN but can't install PyTorch:
import torch
print(torch.__version__)
print(torch.cuda.is_available())
2.3.0+cpu
False
Please let us know.


You've installed the CPU-only binary and would need to install the PyTorch binary with CUDA runtime dependencies by selecting it in the install matrix on our website.
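(A quick way to tell which build is installed, a sketch using only standard torch attributes:)

import torch

print(torch.__version__)     # a "+cpu" suffix means a CPU-only wheel
print(torch.version.cuda)    # None for CPU-only wheels, e.g. "12.1" for cu121 wheels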

Thank you for the reply. I've tried several times to install "stable = Windows = conda = Python = CUDA (12.1)", but the installation never completed. It says "('Connection broken: IncompleteRead(23132496 bytes read, 1201416417 more expected)', IncompleteRead(23132496 bytes read)" and, under "Downloading and Extracting Packages:":

CondaHTTPError: HTTP 000 CONNECTION FAILED for url https://conda.anaconda.org/nvidia/win-64/libcusolver-dev-11.4.4.55-
Elapsed: -

An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.


"(base) PS C:\Users\hp>". And one more thing: when I check "conda list", it shows "torch 2.3.0+cu121 pypi_0 pypi" and "torchaudio 2.3.0+cu121 pypi_0 pypi", but I can't find any pytorch entry, neither CPU nor CUDA. I tried both ways, "conda install" and "pip3 install".

" import torch
print(torch.version)
print(torch.version.cuda)
print(torch.randn(1).cuda())

2.3.0+cpu
None
AssertionError: Torch not compiled with CUDA enabled

What should I do now? Please let me know what to do next.

The error points to a connection issue when trying to download packages from conda, so you might want to check your connection or try to download it later as suggested in the error message.
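(If the download keeps breaking, one possible workaround, not suggested above, is to use the pip wheel route from earlier in the thread with pip's standard retry/timeout options; the values below are arbitrary examples:)

# Shell, before retrying:
#   pip3 install --retries 10 --timeout 60 torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
# Then verify the result:
import torch

print(torch.__version__, torch.version.cuda, torch.cuda.is_available())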