Help with RTX 5090

Hello, I’m not good with coding and have been trying everything to get Stable Diffusion to work. I have:
an RTX 5090 Laptop GPU
Python 3.10.7
PyTorch 2.7.1
CUDA 12.8

and I am still getting this error:

C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
No module 'xformers'. Proceeding without it.

C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py:215: UserWarning:
NVIDIA GeForce RTX 5090 Laptop GPU with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5090 Laptop GPU GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

There is a lot more output after that, but I think this is the main issue. I have looked through similar posts and could only find enough to get to where I am now. Could someone please help?

You would need to install the latest stable or nightly binary with CUDA 12.8 by selecting the right CUDA version in our install matrix and copy/pasting the command to your Python environment.
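For reference, selecting Stable / Windows / Pip / CUDA 12.8 in that matrix should give you something along these lines (the exact command may differ slightly depending on your selection):

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128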

Thank you, but I don’t know what that means:

What exactly is the install matrix? Is it (https://pytorch.org/get-started/locally/)?
(I did run the "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128" command.)

And how do I know what the right CUDA version is?

And is the Python environment the part that says "set PYTHON="?

Yes

In this case you might have had an older PyTorch binary using an older CUDA toolkit, as the error message points to a PyTorch build with CUDA <=12.6.

CUDA 12.8 is required for Blackwell.
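If you want to check what your currently installed binary was actually built with, this quick check should show it (a minimal sketch; the exact values depend on your install):

import torch

print(torch.version.cuda)                   # CUDA runtime the wheel was built with; needs to be 12.8+ for Blackwell
print(torch.cuda.get_arch_list())           # compute capabilities compiled into the wheel; Blackwell needs sm_120
print(torch.cuda.get_device_capability(0))  # your GPU's compute capability; an RTX 5090 should report (12, 0)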

I just ran this:

C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Apr__9_19:29:17_Pacific_Daylight_Time_2025
Cuda compilation tools, release 12.9, V12.9.41
Build cuda_12.9.r12.9/compiler.35813241_0

I will keep trying things; here is the entire console output:

Microsoft Windows [Version 10.0.26100.4061]
(c) Microsoft Corporation. All rights reserved.

C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Wed_Apr__9_19:29:17_Pacific_Daylight_Time_2025
Cuda compilation tools, release 12.9, V12.9.41
Build cuda_12.9.r12.9/compiler.35813241_0

C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui>call webui.bat
venv "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments:
C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
No module 'xformers'. Proceeding without it.
C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py:215: UserWarning:
NVIDIA GeForce RTX 5090 Laptop GPU with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5090 Laptop GPU GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(
Loading weights [6ce0161689] from C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\configs\v1-inference.yaml
C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:943: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 7.0s (prepare environment: 1.6s, import torch: 2.3s, import gradio: 0.7s, setup paths: 0.4s, initialize shared: 0.2s, other imports: 0.4s, load scripts: 0.7s, create ui: 0.3s, gradio launch: 0.3s).
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "C:\Users\slade\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\slade\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\slade\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_models.py", line 845, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_models.py", line 440, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2138, in load_state_dict
    load(self, state_dict)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 1 more time]
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2120, in load
    module._load_from_state_dict(
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_disable_initialization.py", line 225, in <lambda>
    linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\_meta_registrations.py", line 4516, in zeros_like
    res.fill_(0)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.



Stable diffusion model failed to load
Applying attention optimization: Doggettx... done.
Exception in thread Thread-18 (load_model):
Traceback (most recent call last):
  File "C:\Users\slade\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\slade\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\initialize.py", line 154, in load_model
    devices.first_time_calculation()
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\devices.py", line 281, in first_time_calculation
    conv2d(x)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 599, in network_Conv2d_forward
    return originals.Conv2d_forward(self, input)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Loading weights [6ce0161689] from C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\configs\v1-inference.yaml
C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:943: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "C:\Users\slade\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "C:\Users\slade\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\ui.py", line 1165, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_models.py", line 693, in get_sd_model
    load_model()
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_models.py", line 845, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_models.py", line 440, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_disable_initialization.py", line 223, in <lambda>
    module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_disable_initialization.py", line 221, in load_state_dict
    original(module, state_dict, strict=strict)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2138, in load_state_dict
    load(self, state_dict)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2126, in load
    load(child, child_state_dict, child_prefix)
  [Previous line repeated 1 more time]
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2120, in load
    module._load_from_state_dict(
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_disable_initialization.py", line 225, in <lambda>
    linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\modules\sd_disable_initialization.py", line 191, in load_from_state_dict
    module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
  File "C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui\venv\lib\site-packages\torch\_meta_registrations.py", line 4516, in zeros_like
    res.fill_(0)
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.



Stable diffusion model failed to load


Your locally installed CUDA toolkit does not matter since the PyTorch binaries ship with their own CUDA runtime dependencies.

So how do I avoid using the locally installed one and use the CUDA runtime bundled with the PyTorch binaries instead?

pip uninstall torch torchvision torchaudio -y
pip uninstall torch torchvision torchaudio -y # rerun this command a few times to make sure all old PyTorch binaries are removed

# install PyTorch from: https://pytorch.org/get-started/locally/
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
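Note that stable-diffusion-webui uses its own virtual environment (your log shows venv\Scripts\Python.exe), so these commands most likely need to be run inside that venv rather than against the system Python. A rough sketch, assuming the default venv location from your log:

cd C:\Users\slade\OneDrive\Desktop\AI\stable-diffusion-webui
venv\Scripts\activate
:: then run the uninstall/install commands above inside this environment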

Verify the installation worked:

import torch
print(torch.version.cuda)          # should print 12.8 with the cu128 wheels
print(torch.cuda.get_arch_list())  # should include sm_120 for Blackwell GPUs