Excuse me, I’m trying to install Stable Diffusion, however I know nothing about Python and have encountered an error while installing through webui-user.bat:
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Does anyone have an idea of how to fix it, or what may be causing the issue? If you do, please explain in an idiot-friendly way.
I assume this might be the first time you are trying to install or use PyTorch?
If so, first of all, welcome! Could you try to run a quick smoke test to see if you’ve installed the right PyTorch version with CUDA support, e.g. via:
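A minimal check like the following (a sketch; it assumes PyTorch is already installed in the environment the webui uses) will tell you whether your build was compiled with CUDA support:

```python
import torch

# The version string ends in e.g. "+cu121" for a CUDA build,
# or "+cpu" for a CPU-only build.
print(torch.__version__)

# True only if PyTorch was built with CUDA support AND a
# compatible GPU and driver are actually present.
print(torch.cuda.is_available())
```

If `torch.cuda.is_available()` prints `False`, Torch genuinely cannot see a CUDA GPU, which is exactly what the webui’s assertion is complaining about.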
I got this message too, since I’m running an AMD RX 6800 graphics card, which doesn’t support CUDA. I’m following the guide at Arki's Stable Diffusion Guides. If you look in webui-user.bat you should see a line that looks like this:
set COMMANDLINE_ARGS=
The message is telling you to change the line to this:
set COMMANDLINE_ARGS=--skip-torch-cuda-test
This then successfully sets the project up and runs the web UI. Unfortunately, txt2img then fails with:
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
This seems to be something to do with not having CUDA, but I don’t see what to do about it. Damn scalpers; I’d have bought Nvidia if they were available.
I agree with your basic assertion that your GPU seems CUDA capable, and I guess Torch ought to be able to use it. But since you get that error message about adding --skip-torch-cuda-test to COMMANDLINE_ARGS, have you tried doing that? I explained how to do this in webui-user.bat in my post above. I’m very much a n00b with this software and hardware and have fallen at the first post, but it’s the only thing I can suggest that you apparently haven’t tried.
How did you install torch? Did you install it with ROCm instead of CUDA from the Start Locally | PyTorch page? Because I think torch refers to any GPU device via cuda, even if it’s ROCm (correct me if I’m wrong @ptrblck).
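One way to check which backend a given build uses (a sketch, assuming PyTorch is installed): on a ROCm build, torch.version.hip is set while torch.version.cuda is None, yet the device string is still "cuda":

```python
import torch

# At most one of these is set, depending on how the wheel was built:
print("CUDA:", torch.version.cuda)   # e.g. "12.1" on a CUDA build, None otherwise
print("HIP: ", torch.version.hip)    # e.g. "5.6" on a ROCm build, None otherwise

# Even on a ROCm build, GPU tensors are addressed as "cuda":
if torch.cuda.is_available():
    t = torch.ones(2, device="cuda")
    print(t.device)
```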
I don’t think this is CUDA related, because the LayerNormKernelImpl function is using some PyTorch op which only supports float32 or float64 and not float16, aka half. So, can you share what this LayerNormKernelImpl function is?
You can run your code within a torch.autograd.set_detect_anomaly context manager, and it’ll point to the line that’s causing the issue.
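For reference, a small self-contained sketch of that context manager in action (the tensor here is just a placeholder, not anything from the webui code):

```python
import torch

x = torch.randn(4, requires_grad=True)

# Inside this context, autograd records a traceback for each forward op,
# so if backward() hits a NaN/Inf it reports the forward line responsible.
with torch.autograd.set_detect_anomaly(True):
    y = (x * 2.0).sum()
    y.backward()

print(x.grad)  # gradient of sum(2x) w.r.t. x is all 2s
```

Note that anomaly detection slows execution down considerably, so it’s best enabled only while debugging.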
Somehow adding only --skip-torch-cuda-test was not enough for my first run, so I added --lowvram --precision full --no-half --skip-torch-cuda-test instead to make it work. But the program now works with only --skip-torch-cuda-test.
In case of an error like RuntimeError: "log" "_vml_cpu" not implemented for 'Half', add --precision full --no-half to COMMANDLINE_ARGS.
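Putting the flags together, the edited line in webui-user.bat would look something like this (which flags you actually need depends on which errors you hit):

```bat
set COMMANDLINE_ARGS=--skip-torch-cuda-test --precision full --no-half
```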