I'm trying to install Stable Diffusion but I'm dumb

Excuse me, I'm currently trying to install Stable Diffusion; however, I know nothing about Python and have run into an error while installing through webui-user.bat:

AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Does anyone have an idea how to fix it, or what might be causing the issue? If you do, please explain it in an idiot-friendly way.

I assume this might be the first time you are trying to install or use PyTorch?
If so, first of all welcome! Could you try to run a quick smoke test to see if you've installed the right PyTorch version with CUDA support, e.g. via:

python -c "import torch; print(torch.randn(1).cuda())"

If this returns an error, your installation doesn’t support your GPU and we would need to figure out why.
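
If you want a bit more detail than that one-liner, a quick sketch along these lines (run with the same Python interpreter the web UI uses) prints the pieces that usually matter:

import torch

# A CPU-only wheel reports torch.version.cuda as None and
# torch.cuda.is_available() as False.
print("torch version:  ", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:         ", torch.cuda.get_device_name(0))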

Could you then post the output of python -m torch.utils.collect_env here by wrapping it into three backticks ```?

Also, which install command were you using to install the PyTorch binaries?

I got this message too, since I'm running an AMD RX6800 graphics card, which doesn't support CUDA. I'm following the guide at Arki's Stable Diffusion Guides. If you look in webui-user.bat you should see a line that looks like this:

set COMMANDLINE_ARGS=

The message is telling you to change the line to this:

set COMMANDLINE_ARGS=--skip-torch-cuda-test

This then successfully sets the project up and runs the web UI. Unfortunately, txt2img then fails with:
"RuntimeError: “LayerNormKernelImpl” not implemented for ‘Half’

This seems to be something to do with not having CUDA, but I don't see what to do about it :) Damn scalpers, I'd have bought Nvidia if cards had been available.

Hi, I ran into the same problem. I followed this tutorial (Tutorial: CUDA, cuDNN, Anaconda, Jupyter, PyTorch Installation in Windows 10 | by Sik-Ho Tsang | Medium) to install CUDA, cuDNN, Anaconda, Jupyter, and PyTorch, but webui-user.bat still gives the same error:

AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Running the line python -c "import torch; print(torch.randn(1).cuda())" returns:

tensor([0.4252], device='cuda:0')

and python -m torch.utils.collect_env returns:

PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.9.12 (main, Apr  4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 517.48
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.2
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] blas                      1.0                         mkl
[conda] cudatoolkit               11.3.1               h59b6b97_2
[conda] mkl                       2021.4.0           haa95532_640
[conda] mkl-service               2.4.0            py39h2bbff1b_0
[conda] mkl_fft                   1.3.1            py39h277e83a_0
[conda] mkl_random                1.2.2            py39hf11a4ad_0
[conda] numpy                     1.21.5           py39h7a0a035_1
[conda] numpy-base                1.21.5           py39hca35cd5_1
[conda] numpydoc                  1.2                pyhd3eb1b0_0
[conda] pytorch                   1.12.1          py3.9_cuda11.3_cudnn8_0    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torchaudio                0.12.1               py39_cu113    pytorch
[conda] torchvision               0.13.1               py39_cu113    pytorch

I think my GPU is capable of being used by Torch. What have I done wrong?

I agree with your basic assertion that your GPU seems CUDA capable, and I guess Torch ought to be able to use it... but since you get that error message about adding "--skip-torch-cuda-test to COMMANDLINE_ARGS", have you tried doing that? I mentioned how to do this in webui-user.bat in my post above. I'm very much a n00b with this software and hardware and have fallen at the first post, but it's the only thing I can suggest that you apparently haven't tried :)

Hi @neekfenwick,

How did you install torch? Did you install it with ROCm instead of CUDA from the Start Locally | PyTorch page? Because I think torch refers to any GPU device via cuda even if it's ROCm (correct me if I'm wrong @ptrblck).
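
If it helps, here is a rough way to check which backend a given build was compiled for (a CUDA wheel sets torch.version.cuda, a ROCm wheel sets torch.version.hip, but both expose the GPU through the cuda device string):

import torch

# Which backend was this build compiled against? Exactly one of these
# should be a version string; the other is None.
print("torch:", torch.__version__)
print("CUDA build:", torch.version.cuda)
print("ROCm/HIP build:", torch.version.hip)
print("GPU visible to torch:", torch.cuda.is_available())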

I don't think this is CUDA related, because the LayerNormKernelImpl function is using some PyTorch op which only supports float32 or float64 and not float16 (a.k.a. half). So, can you share what this LayerNormKernelImpl function is?
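
If it is just a plain nn.LayerNorm running on the CPU in half precision, this tiny sketch (my guess at a minimal repro) should hit the same error:

import torch
import torch.nn as nn

# On CPU builds that lack a float16 LayerNorm kernel (as in this thread),
# running the module in half precision raises:
#   RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
ln = nn.LayerNorm(8).half()
x = torch.randn(2, 8, dtype=torch.float16)
try:
    print(ln(x))
except RuntimeError as e:
    print(e)

# Keeping the module and input in float32 works fine on the CPU.
print(nn.LayerNorm(8)(x.float()))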

You can run your code within a torch.autograd.set_detect_anomaly context manager, and it’ll point to the line that’s causing the issue.
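
For example (a toy model and batch below, just to show the shape of the usage):

import torch
import torch.nn as nn

# Stand-ins for whatever the real pipeline runs; the point is only the
# context manager, which makes failing autograd ops report a fuller traceback.
model = nn.Sequential(nn.Linear(8, 8), nn.LayerNorm(8))
batch = torch.randn(4, 8)

with torch.autograd.set_detect_anomaly(True):
    loss = model(batch).mean()
    loss.backward()  # if an op misbehaves, the error now points at that op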

Hi @neekfenwick,

Somehow adding only --skip-torch-cuda-test was not enough for my first run, so I added --lowvram --precision full --no-half --skip-torch-cuda-test to make it work. The program now runs with only --skip-torch-cuda-test, though.

In case of an error like RuntimeError: "log" "_vml_cpu" not implemented for 'Half', add --precision full --no-half to COMMANDLINE_ARGS.
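
Roughly speaking, those flags keep everything in float32 on the CPU. A small sketch of why half precision trips things up (whether the first call actually errors depends on your exact PyTorch build):

import torch

x = torch.rand(3).half()  # float16 tensor on the CPU

# On builds where the float16 CPU kernel for log is missing, this raises
# something like: RuntimeError: "log" "_vml_cpu" not implemented for 'Half'.
try:
    print(torch.log(x))
except RuntimeError as e:
    print("half failed:", e)

# Casting to float32 first is effectively what --precision full --no-half does.
print(torch.log(x.float()))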