torch.cuda.is_available() is False - CUDA 12.2 - RTX 4070

Machine learning newbie here, stuck on the first step of learning PyTorch: installing CUDA. I’ve been trying to get CUDA working on my system for the past day to no avail. I’ve created multiple environments and then tried installing PyTorch with both pip and conda using the configurations below, but none of them worked.

I’ve tried downloading multiple versions of the CUDA toolkit (cuda_12.2.0_536.25_windows.exe and cuda_11.8.0_windows_network.exe) and neither helped. I also tried conda install cudatoolkit in another environment, but that didn’t work either, even though I also installed pytorch-cuda==11.8.

To be honest, I don’t really know which versions of one piece of software need which versions of the other, and I’m very confused. What does the CUDA version in nvidia-smi mean? Is it a driver version? A hardware property? Does it support older versions of pytorch-cuda, or older versions of the CUDA toolkit? Is the CUDA toolkit version supposed to match the CUDA version reported by nvidia-smi and the pytorch-cuda version?

nvidia-smi:
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 536.25                 Driver Version: 536.25       CUDA Version: 12.2      |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf          Pwr:Usage/Cap  |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4070      WDDM  | 00000000:01:00.0  On |                  N/A |
|  0%   31C    P8              7W / 200W  |  1015MiB / 12282MiB  |      1%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+


You would only need to properly install the NVIDIA driver, not the CUDA toolkit, since PyTorch ships with all CUDA dependencies.
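A quick way to see this from Python (a minimal sketch, assuming PyTorch is already installed in the active environment):

```python
import torch

# CUDA runtime version the installed PyTorch binary was built with.
# It is bundled with the wheel; None means a CPU-only build was installed.
print(torch.version.cuda)

# True only if the binary has CUDA support *and* the NVIDIA driver is working.
print(torch.cuda.is_available())

if torch.cuda.is_available():
    # e.g. "NVIDIA GeForce RTX 4070"
    print(torch.cuda.get_device_name(0))
```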

Really? That’s all I need to install? All the YouTube videos I watched said to install the CUDA toolkit from https://developer.nvidia.com/cuda-downloads

How do I check whether I’ve properly installed the NVIDIA driver? It looks fine to me.

And so I can install PyTorch with any compute platform version, like CUDA 11.7, 11.8, or 12.1, and get torch.cuda.is_available() == True?

I managed to get it working. This is what I did. I don’t know how or why it works, but it does.

I uninstalled Python but kept Anaconda (along with Anaconda’s Python, pip, etc.).
I uninstalled all the CUDA toolkits and their associated packages, such as Nsight, and then installed the 11.8 toolkit.
I installed PyTorch with the following command:


I did NOT set up a conda environment using conda create -n env this time; instead I’m doing everything in the base conda environment. In this base conda environment I ran the pip install from above.

Yes, you would only need an NVIDIA driver; your locally installed CUDA toolkit will only be used if you are building PyTorch from source or compiling custom CUDA extensions. I cannot comment on the YouTube videos you were watching.
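If you want to see the distinction from Python, here is a small sketch (assuming PyTorch is installed; a local toolkit not being found is perfectly fine for normal use):

```python
import torch
from torch.utils import cpp_extension

# CUDA runtime bundled inside the PyTorch binary - this is what torch itself uses.
print(torch.version.cuda)

# Local CUDA toolkit that would only be used when compiling extensions from source.
# None simply means no local toolkit was found, which does not affect torch.cuda.
print(cpp_extension.CUDA_HOME)
```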

Good to hear it’s working now!

To double check that the installation process above is correct, I uninstalled Python and Anaconda while making sure to remove the folders left behind after their installation. I then re-installed Anaconda only, not Python, and pip installed PyTorch and its related packages in my base environment using this:
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121

Note that I still have the 11.8 CUDA toolkit from before.

Again, your locally installed CUDA toolkit won’t be used.
And yes, conda can be used to manage your virtual environments, and installing PyTorch in an empty environment using the provided install commands for the pip wheels or conda binaries will work.
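One sanity check worth running when juggling several environments (a minimal sketch, assuming torch imports at all): confirm that the interpreter you are running is the one from the environment you installed PyTorch into.

```python
import sys
import torch

# The interpreter actually executing this code - it should live inside the
# conda environment you installed PyTorch into.
print(sys.executable)

# Where the imported torch package comes from and which build it is.
print(torch.__file__)
print(torch.__version__)
```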


Yup. I ended up uninstalling the CUDA toolkit and torch.cuda.is_available() was still True.

Hello. I have the same problem.
Nvidia GTX 1650
driver version: 536.99
CUDA Toolkit 12.2

I installed PyTorch with this command: “pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121”, but I receive the message “Requirement already satisfied” in the Anaconda prompt.

When I run “torch.cuda.is_available()” in a Jupyter notebook, the output is “False” :(
Please, can you help me? I’ve searched a lot on the internet and I’ve tried many things other than uninstalling the 12.2 version and installing the 11.8 version.


Most likely a CPU-only binary is found, so uninstall all previous PyTorch installations and install the desired one afterwards.
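To check which binary is actually being picked up, a small sketch (the version strings below are only examples):

```python
import torch

# The pip wheels from download.pytorch.org encode the compute platform in the
# version string, e.g. "...+cu121" for a CUDA 12.1 build or "...+cpu" for a
# CPU-only build.
print(torch.__version__)

# None means the installed binary has no CUDA support at all.
print(torch.version.cuda)
```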


@ptrblck My device is also an RTX 4070. Is there any way to install PyTorch with CUDA 12.2, or is there another command that can do it? Thanks!

Thanks a lot.
Your suggestion worked exactly for me and my problem was solved.

If you really need CUDA 12.2 for whatever reason, you would need to build from source. If 12.1 would work, you can install the nightly binaries as already explained.

Hi, I have CUDA 12.2 with an RTX 3050 mobile. If I select Stable (2.1.0) with CUDA 12.1, will it work?

Follow-up question: do I also need to install cuDNN (8.9.4 for CUDA 12.x) before doing all this?

Yes, it’ll work.

No, since the PyTorch binaries ship with all CUDA dependencies, including cuDNN, cuBLAS, NCCL, etc.
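If you want to confirm that the bundled cuDNN is picked up, a minimal check (assuming a CUDA-enabled build is installed):

```python
import torch

# Both values come from the libraries shipped inside the PyTorch binary,
# not from a separately installed cuDNN.
print(torch.backends.cudnn.is_available())
print(torch.backends.cudnn.version())  # an integer such as 8902 for cuDNN 8.9.x
```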

Thank you so much. Will try and update with the results

What do you mean? Can you please explain briefly? I have an RTX 4060 with CUDA 12.4. When I execute

torch.cuda.is_available()

I am getting False. What should I do?

You might have installed the CPU-only binary or your NVIDIA driver is not properly installed. Check if your PyTorch binary ships with CUDA dependencies via print(torch.version.cuda) and make sure a valid version is returned.
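A slightly fuller diagnostic you can run in the same session (a sketch; the last two lines only make sense once is_available() returns True):

```python
import torch

print(torch.version.cuda)         # should print a version string such as "12.1", not None
print(torch.cuda.is_available())  # should be True with a working driver

# Final smoke test: run a small computation on the GPU.
x = torch.randn(3, 3, device="cuda")
print((x @ x).sum().item())
```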

Hello, I have an RTX 4070 as well. I have tried several things to activate CUDA but nothing works. The NVIDIA driver is up to date. I have tried to install PyTorch with the command pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

How do I know which CUDA version I need or can use? According to your explanations I don’t need to install the CUDA toolkit.

Which command do I need to use to install PyTorch with the CUDA dependencies?

Thanks a lot for your support!

That’s correct. Your locally installed CUDA toolkit won’t be used unless you build PyTorch from source or a custom CUDA extension.

pip install torch will install the latest stable PyTorch release with CUDA 12.1 dependencies. For other versions you should check the commands provided in the install matrix.