PyTorch unable to access CUDA despite following all the right(?) steps

Running Windows 10, I did a fresh install of Anaconda with Python 3.10.6, created a fresh environment on Python 3.10.4 using the Anaconda Navigator, activated the environment in the Anaconda terminal, and installed PyTorch for my CUDA version 11.7 using the command from the “get started locally” page. Once finished, I ran python, then import torch; torch.cuda.is_available() returned False, and torch.version.cuda returned None. I don’t have cpuonly installed.
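For clarity, this is roughly the check I ran at the Python prompt (paraphrased from memory, so the exact session may differ):

```python
import torch

# On a working CUDA build, is_available() should return True and
# version.cuda should report the CUDA runtime the binary was built with.
print(torch.__version__)
print(torch.cuda.is_available())  # False for me
print(torch.version.cuda)         # None for me
```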

I don’t know where to go from here. I’ve looked everywhere I can think of, using every relevant term I know, and I can’t think of anything else to do but ask for help on a public forum now.

Why can’t PyTorch access CUDA? What can I do to remedy this issue?

Sorry if this issue is an extremely easy fix.

Based on torch.version.cuda returning None, it sounds like you have unfortunately installed the CPU-only binary.

Here is a screenshot of the environment I’m trying to use CUDA in, showing packages from “b” to “f” in the Anaconda Navigator. No CPU-only binary to be found.
And here is a screenshot of the base (root) environment, packages “co” to “cs” in the Navigator, again with no CPU-only binary to be seen.
I’ve also searched both environments for anything with “cpu” in the name and come up empty.
(Unless binaries and packages are two separate things within Anaconda/the Navigator and I’m just barely missing it, in which case I am terribly sorry.)

Also, a note about torch.version.cuda returning None: it doesn’t actually print a line that says None; it just goes to a new line immediately. Sorry if that’s an important distinction that I failed to make earlier.
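(I’m guessing the interactive prompt simply doesn’t echo a bare None, so wrapping the attribute in print() should make the value visible. A minimal sketch of what I mean:)

```python
import torch

# The interactive prompt prints nothing when an expression evaluates to None,
# so the attribute looks "empty" when typed on its own.
torch.version.cuda         # no output at the prompt if the value is None
print(torch.version.cuda)  # prints "None" explicitly on a CPU-only build
```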

But no torch+cuXXX version is mentioned there either, so I don’t think this screenshot is sufficient. Check the installed PyTorch binary via pip list or conda list. Additionally, check the install log, as it would show which exact binary was installed.

This screenshot also shows irrelevant packages and no PyTorch binary.

Forgive me if I do not understand your help completely.

Running either command while the environment I set up is active tells me that I have pytorch 1.12.1, pytorch-mutex 1.0 (image), torchaudio 0.12.1, and torchvision 0.13.1 (image). In the same lists, I cannot find cpuonly, nor can I find anything resembling torch+cuXXX.

Additionally, I either cannot find the appropriate log files on my computer or through the terminal, or the log I did find in the Anaconda Navigator app does not provide a list of the installed binaries.

Thank you for your continued assistance, apologies for my continued ignorance.

Your screenshot shows the cpu tag on the right for pytorch as well as for torchvision and torchaudio, so either uninstall these binaries and install the right ones with a CUDA runtime, or create a new virtual environment and install the right ones there.

I ran the CUDA Toolkit installer from CUDA Toolkit 11.7 Update 1 Downloads | NVIDIA Developer, then created a new environment with Python 3.10.4, activated it, and installed CUDA using Conda with the command conda install cuda -c nvidia.

Again I went to Start Locally | PyTorch and ran the command conda install pytorch torchvision torchaudio cudatoolkit=11.7 -c pytorch -c conda-forge. Once again, torch.cuda.is_available() returns False, torch.version.cuda returns nothing at all, and only the cpu build of pytorch shows up when I run conda list in the environment.

I figured there might be some problem with trying to install for a CUDA version that technically isn’t on that webpage, so I created another new environment with Python 3.10.4, activated it, installed CUDA using the aforementioned command, ran the PyTorch installation for 11.6, and so on. Now torch.cuda.is_available() returns True and torch.version.cuda returns 11.6. I’m not 100% sure whether it was just the 11.7-versus-11.6 mismatch, or also the fact that having an Nvidia card apparently doesn’t mean CUDA is ready to go on your PC, but regardless, it works now.
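For anyone who finds this later, here is a quick sanity check along the lines of what I ran after the reinstall, plus a couple of extra (illustrative) steps to confirm the GPU is actually usable; the device name will obviously differ per machine:

```python
import torch

# Verify the installed binary ships a CUDA runtime and can see the GPU.
print(torch.version.cuda)           # "11.6" for the CUDA 11.6 build
print(torch.cuda.is_available())    # True
print(torch.cuda.get_device_name(0))

# Allocate a small tensor on the GPU to confirm the runtime actually works.
x = torch.randn(3, 3, device="cuda")
print(x.device)                     # cuda:0
```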

Thank you for your assistance, apologies for my incompetence.

Your local CUDA toolkit won’t be used unless you want to build PyTorch from source or compile a custom CUDA extension. Installing CUDA via conda install cuda -c nvidia is not supported in the install matrix (support will come with future CUDA 11.7 releases).

That’s expected, as you are manually creating this install command, which then fails.
Use the supported CUDA runtimes by selecting them in the install matrix. Right now the newest supported CUDA runtime is 11.6: click the CUDA 11.6 button, copy/paste the command, and execute it.

In any case, it’s good to hear you’ve solved the issue and it works now.