PyTorch 2.3 with CUDA 12.4 won't install GPU version

Hi, I can't get pip to install the GPU version of PyTorch. I have tried multiple times and keep running into the same issue.

+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 552.44                 Driver Version: 552.44         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 2060      WDDM  |   00000000:01:00.0  On |                  N/A |
| N/A   41C    P8               7W /  80W |      62MiB /   6144MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:30:10_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0

and in Python I run a script and get:

AssertionError: Torch not compiled with CUDA enabled
CUDA Available: False
PyTorch version: 2.3.0+cpu
CUDA Version: None
Number of GPUs: 0
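For reference, a minimal diagnostic script that produces output in that shape (a sketch; the original poster's exact script isn't shown) would be:

```python
import torch

# Report whether this PyTorch build was compiled with CUDA support
# and can see a GPU. On a CPU-only wheel, is_available() is False,
# __version__ ends in "+cpu", and torch.version.cuda is None.
print("CUDA Available:", torch.cuda.is_available())
print("PyTorch version:", torch.__version__)
print("CUDA Version:", torch.version.cuda)
print("Number of GPUs:", torch.cuda.device_count())
```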

I tried installing: pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121

All with the same outcome.

I also tried: pip install torch==2.3.0+cu121 torchvision==0.18.0+cu121 torchaudio==2.3.0+cu121 -f https://download.pytorch.org/whl/torch_stable.html

Assuming you are using a supported Python version, just run pip install torch and it’ll install the latest PyTorch release with CUDA dependencies.

Oh, how I wish that were true for aarch64–although admittedly I have not actually tried simply ‘pip install torch’. I’ve looked for the wheels on your site, but I suppose there is always the possibility that I missed them somehow. If that’s the case, please simply point me in the right direction and disregard the rest of this message, heh. A 2.3.0+cu121 aarch64 container appeared briefly on Docker Hub, and I was elated. Then, it disappeared. I know that the NGC containers exist, but they don’t use pytorch releases, have some non-standard names for other packages, and a whole bunch of packages I don’t need. I presume that you guys removed that container for a reason, but if it should indeed be there, please put it back! ;-). The on-demand pricing for GH200s is just too good for me to pass up.

Thanks.

The ARM+CUDA wheels are not published yet, but if you don’t want to use the NGC containers, you can download and install the wheels from the builder jobs, e.g. this one (scroll down and download the wheel for the Python version you are using).

Note that only sm_90 is currently supported in these wheels.
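If you want to confirm which compute capabilities a given binary was compiled for, one quick check (a sketch, assuming a working torch install) is:

```python
import torch

# torch.cuda.get_arch_list() returns the architectures baked into this
# build, e.g. ["sm_90"] for the GH200-targeted ARM wheels mentioned above,
# or an empty list for a CPU-only build.
print(torch.cuda.get_arch_list())
```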

Hi, I am using an NVIDIA 3090 on Windows 11 with driver version 551.86 and CUDA 12.4. I created a new environment using conda create -n newEnv python=3.10 and activated it. The default packages are:

Package    Version
---------- -------
pip        24.0
setuptools 69.5.1
wheel      0.43.0

I installed:
py -m pip install nvidia-cudnn-cu12
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia

Input:
import torch; print(torch.cuda.is_available()); print(torch.__version__)
Output:
True
‘2.3.0’

I want to know why my PyTorch version does not look CUDA-compiled. I expected the version to be something like 2.3.0+cu121.

Thanks for your support in advance.

This would be the case for pip wheels, but not conda binaries. Check the CUDA runtime version via torch.version.cuda and keep in mind your locally installed CUDA toolkit won’t be used since PyTorch ships with its own runtime dependencies.
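In other words (a sketch, assuming a working install), the local-version suffix and the bundled CUDA runtime are reported separately:

```python
import torch

# pip wheels encode the build in the version string (e.g. "2.3.0+cu121"),
# while conda binaries report the plain version (e.g. "2.3.0").
print(torch.__version__)

# The CUDA runtime the binary actually ships with; None on CPU-only builds.
# A system-wide CUDA toolkit installation is not used by the binaries.
print(torch.version.cuda)
```

So on the conda install above, torch.__version__ shows "2.3.0" while torch.version.cuda should show "12.1", confirming a CUDA-enabled build.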