torch.cuda.is_available() always returns False

I have been messing around with some machine learning code and keep stumbling upon GPU-related issues. I am almost certain that my GPU can be used, because applications such as the Stable Diffusion web UI work fine. However, the moment I try to set things up myself, it never works. I have been looking for solutions, and I am guessing that my CUDA version is too low? (6.1, I think.) Any suggestions?

Python version: 3.10.10 (tags/v3.10.10:aad5f6a, Feb 7 2023, 17:20:36) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: False
CUDA runtime version: 12.1.66
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1060 with Max-Q Design
Nvidia driver version: 531.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
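A quick way to cross-check the output above is to ask PyTorch itself which build is installed and which CUDA it was compiled against; this is a generic diagnostic, not specific to this setup:

```python
import torch

# Which PyTorch build is installed, and which CUDA it was compiled against.
# torch.version.cuda is None for CPU-only wheels, which is a common reason
# is_available() returns False even on a machine with a working GPU.
print("torch:", torch.__version__)            # e.g. '1.13.1+cu117' or '2.0.0+cpu'
print("built for CUDA:", torch.version.cuda)  # e.g. '11.7', or None for CPU wheels
print("CUDA available:", torch.cuda.is_available())
```

If the middle line prints None, the installed wheel is CPU-only and no driver or toolkit change will make `is_available()` return True.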

Could you describe in more detail how these two environments might differ and how you’ve installed each of these?

Stable Diffusion provides an installation script for users, and this comes in the form of a web UI. However, there is also an option for people who might prefer a more programmatic approach: for them, there is the source code.

I am not 100% sure, but for the web UI version I would assume it is installed with a container, so all the dependencies are resolved within the container. If you want to use the source code, you have to set everything up yourself: all the pip packages and so on.

I believe no PyTorch binaries support CUDA 12.1 yet. If you follow the official installation guide, you shouldn't have any problems. What @ptrblck meant is probably the commands you used to install PyTorch, e.g. pip install torch..., and the way you installed the CUDA runtime, e.g. manually or through PyTorch.

We do not publish binaries with CUDA 12.1, but source builds will work as we are already using them.

Yes, since @mich1 explained that “stable diffusion webui and it works fine” while the custom setup fails.

That is a hard question. I cannot remember exactly, but I believe I followed the setup on GitHub and installed PyTorch with the command they provided.

I have downgraded my CUDA version to 11.6; however, after running python -m torch.utils.collect_env I still get False for CUDA availability. On top of that, I am now receiving RuntimeError: don't know how to restore data location of (tagged with gpu) when running Whisper (OpenAI). Any cool tips?

Cool, fixed.

What I did:

  • verify the CUDA version
    – the CUDA version was not supported because it was too new
    – downgrade the CUDA version
  • verify that the PyTorch version matches the CUDA version
    – uninstall unnecessary CUDA versions from pip to ensure the PyTorch and CUDA versions are the same
    – e.g. a PyTorch cu117 build should be used with CUDA 11.7 (not sure why I thought it was backward compatible and that different versions wouldn't be an issue, but I made sure they are the same)
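The matching step above can be sketched as a plain string check; the helper name is mine, but the cu117-style tag follows PyTorch's wheel-naming convention:

```python
def tags_match(wheel_tag: str, toolkit_version: str) -> bool:
    """Check a torch wheel build tag like 'cu117' against a CUDA toolkit
    version like '11.7' by comparing the major/minor digits."""
    if not wheel_tag.startswith("cu"):
        return False  # e.g. a 'cpu' wheel can never match any toolkit
    return wheel_tag[2:] == toolkit_version.replace(".", "")

print(tags_match("cu117", "11.7"))  # True
print(tags_match("cu117", "12.1"))  # False
print(tags_match("cpu", "11.7"))    # False
```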

As for RuntimeError: don't know how to restore data location of (tagged with gpu): ignore this one, I simply passed the wrong option in the command.
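For anyone who lands here with that same error: it typically appears when a checkpoint containing CUDA tensors is loaded on a machine where torch.cuda.is_available() is False, and map_location is the usual remedy. A minimal sketch, round-tripping through an in-memory buffer so it runs anywhere:

```python
import io

import torch

# Save a small checkpoint, then load it back with map_location="cpu".
# On a real GPU-saved checkpoint this remaps CUDA-tagged storages to CPU,
# avoiding "don't know how to restore data location of ..." style errors.
buf = io.BytesIO()
torch.save({"w": torch.ones(3)}, buf)
buf.seek(0)
ckpt = torch.load(buf, map_location="cpu")
print(ckpt["w"].device)  # cpu
```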