NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation

I keep getting this error:

torch\cuda\__init__.py:230: UserWarning:
NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5090 GPU with PyTorch, please check the instructions at Start Locally | PyTorch

Is there a workaround for Windows? I believe there is one for Linux, but as far as I can tell, nothing for Windows users. Just curious if this is being looked into.

Yes, you can use WSL in the meantime while we are building the Windows binaries.

Thank you for the confirmation. Is there an estimate on when it will be released?

Hello, any news about the Windows CUDA 12.8 version? I have a 5080 and I can't run it :scream: :cold_sweat: Thanks!

All nightly PyTorch binaries with CUDA 12.8 and Blackwell support are available now for Linux x86, SBSA, and Windows.


Thanks ptrblck! But I keep receiving the warning, and it doesn't work :sob:

UserWarning:
NVIDIA GeForce RTX 5080 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5080 GPU with PyTorch, please check the instructions at Start Locally | PyTorch

Any advice is very welcome! Thanks!

Install the nightly binaries by selecting Preview (Nightly) in the install matrix.
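
After installing the nightly build, it's worth confirming that the wheel actually ships sm_120 kernels. A minimal sketch (the has_blackwell_support helper is mine; torch.cuda.get_arch_list() is the real API):

```python
import re

def has_blackwell_support(arch_list):
    """True if any compiled 'sm_XX' entry is compute capability 10.0+ (sm_100 and up)."""
    for arch in arch_list:
        m = re.match(r"sm_(\d+)", arch)
        if m and int(m.group(1)) >= 100:
            return True
    return False

if __name__ == "__main__":
    try:
        import torch
        archs = torch.cuda.get_arch_list()  # e.g. ['sm_80', 'sm_90', 'sm_120']
        print(torch.__version__, archs)
        print("OK" if has_blackwell_support(archs) else "wheel lacks sm_120 kernels")
    except ImportError:
        print("torch is not installed")
```

If the printed list ends at sm_90, the warning above will keep appearing regardless of the driver version.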


Thanks for the help ptrblck, but it doesn't work; see the screenshot :cold_sweat:

You need pip3 install -U --pre torch ...

-U (or --upgrade) upgrades the packages you already installed.

(Maybe it’s better to add this on the Start Locally page?)


The logs tell you that other (older) versions are already found, so uninstall these.


OMG THANKS!!! you are the best :heart:


Hi,
for CUDA 12.8:

import torch
print(torch.__version__)
# 2.7.0
import torchaudio
print(torchaudio.__version__)
# 2.6.0

How can I update torchaudio to 2.7.0, or is there no update yet?

There is no torchaudio 2.7 yet. See the index URL you use to install torch: https://download.pytorch.org/whl/nightly/torchaudio/
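
Since torchaudio tracks torch's major.minor release (a 2.7.x torchaudio pairs with a 2.7.x torch), a mismatched pair will refuse to load. A sketch of that pairing rule (the versions_match helper is mine):

```python
def versions_match(torch_version, torchaudio_version):
    """torchaudio is paired with torch by major.minor (e.g. 2.7.x <-> 2.7.x).
    Local tags like '+cu128' are ignored."""
    def major_minor(v):
        base = v.split("+")[0]   # drop '+cu128'
        parts = base.split(".")
        return parts[0], parts[1]
    return major_minor(torch_version) == major_minor(torchaudio_version)

print(versions_match("2.7.0+cu128", "2.6.0"))  # False: torchaudio lags behind
print(versions_match("2.7.0", "2.7.0"))        # True
```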

I’m getting the same output as this, do you mean I have to uninstall all those versions that show in the output?

No, you would need to uninstall only the older PyTorch binaries, which are incompatible. In the previously shared screenshot you can see that Requirement already satisfied: torch in ... (2.5.1+cu121) is shown. In this case, run pip uninstall torch to remove this old PyTorch 2.5.1 binary with CUDA 12.1 runtime dependencies, since your Blackwell GPU requires PyTorch binaries built with CUDA 12.8+.
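
For anyone unsure which installed wheel is the stale one: the CUDA runtime a wheel was built against is encoded in its local version tag, e.g. '+cu121' means CUDA 12.1, and Blackwell needs 12.8+. A sketch of that check (helper names are mine):

```python
import re

def cuda_tag(version):
    """Extract (major, minor) from a wheel version like '2.5.1+cu121', or None."""
    m = re.search(r"\+cu(\d+)", version)
    if not m:
        return None
    digits = m.group(1)                  # '121' -> (12, 1), '128' -> (12, 8)
    return int(digits[:-1]), int(digits[-1])

def needs_reinstall_for_blackwell(version):
    """True if this wheel was built against a CUDA runtime older than 12.8."""
    tag = cuda_tag(version)
    return tag is None or tag < (12, 8)

print(needs_reinstall_for_blackwell("2.5.1+cu121"))  # True: CUDA 12.1 is too old
print(needs_reinstall_for_blackwell("2.8.0+cu128"))  # False
```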

NVIDIA GeForce RTX 5070 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA… I swear, I'm shocked. As of July 28, 2025, you still haven't resolved this issue (I'm referring to the inability to make PyTorch fully compatible with RTX 5000 Blackwell sm_120 cards). I've had a ROG Strix 5070 Ti 16GB for two months, and after reading hundreds of posts (PyTorch and NVIDIA forums), no developer has had the courage to admit that the problem hasn't been resolved, not even with that much-vaunted Preview (Nightly) build. You see, it's not so much that the PyTorch version isn't fully compatible with the latest RTX 5000 series, that's normal; what I can't stomach is that you deny the evidence and treat users like they're incompetent. If the requests for help persist, ask yourselves a few questions. Thanks, Max Corvy

All of our nightly and stable PyTorch binaries support the RTX 50XX series as was also confirmed in other threads. If you have trouble installing the right binary, please let us know.

Thanks for the reply. I’d like to add a few details about my setup to clarify the behavior I’m seeing.

:desktop_computer: Operating system: Windows 11 Pro (clean install)
:snake: Python: 3.12.1
:brain: CUDA Toolkit: 12.8 (driver 555.52, standalone install from developer.nvidia.com)
:repeat_button: cuDNN: 8.9.7 (manually copied to /bin, /lib, /include)
:package: PyTorch: nightly 2.3.0.dev2025xxxx installed via pip with CUDA 12.8 backend
:package: Torchvision / Torchaudio: compatible nightly builds
:package: TensorRT: 10.13.0 RTX Edition (used marginally for benchmarking)

I’ve tested everything in clean venv environments, no custom builds.
Despite the stated support for RTX 50XX, fallback to sm_120 still triggers, causing warnings and unusual log behavior — even with standard models like ResNet and UNet.

Specific questions:

  1. Is there a binary tested on Windows + Python 3.12 + CUDA 12.8 for RTX 50XX?
  2. Is fallback to sm_120 expected behavior or a hardware detection issue?
  3. Is there a way to properly force sm_89 in official builds or disable auto-detect?

I understand that support is claimed, but a more technical confirmation for OS, Python, and CUDA would help clarify things — not just for me, but for others in the community seeing the same behavior.

Thanks in advance for engaging on this. :folded_hands:
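
On questions 2 and 3: sm_120 is the compute capability the driver reports for Blackwell hardware, not a fallback PyTorch selects, and (as far as I know) a Blackwell chip cannot be forced to run sm_89 binaries, since compute capability is fixed by the silicon. A sketch of how the reported capability maps to the warning's 'sm_XX' string (the helper name is mine; torch.cuda.get_device_capability() is the real API):

```python
def capability_to_sm(capability):
    """Map a (major, minor) capability tuple, as returned by
    torch.cuda.get_device_capability(), to the 'sm_XX' string in the warning."""
    major, minor = capability
    return f"sm_{major}{minor}"

print(capability_to_sm((12, 0)))  # 'sm_120' -> RTX 50XX (Blackwell)
print(capability_to_sm((8, 9)))   # 'sm_89'  -> RTX 40XX (Ada)
```

So if the warning names sm_120, the hardware was detected correctly; the fix is installing a wheel compiled for it, not overriding detection.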

@ptrblck While using Ubuntu 22.04, I get this error:

Traceback (most recent call last):
  File "/home/robo/face_detect/main.py", line 464, in <module>
    main(opt)
  File "/home/robo/face_detect/main.py", line 457, in main
    run(**vars(opt))
  File "/home/robo/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
    return func(*args, **kwargs)
  File "/home/robo/face_detect/main.py", line 193, in run
    model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
  File "/home/robo/face_detect/models/common.py", line 489, in __init__
    model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
  File "/home/robo/face_detect/models/experimental.py", line 99, in attempt_load
    ckpt = (ckpt.get("ema") or ckpt["model"]).to(device).float()  # FP32 model
  File "/home/robo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1175, in float
    return self._apply(lambda t: t.float() if t.is_floating_point() else t)
  File "/home/robo/face_detect/models/yolo.py", line 208, in _apply
    self = super()._apply(fn)
  File "/home/robo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)
  File "/home/robo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)
  File "/home/robo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/home/robo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 955, in _apply
    param_applied = fn(param)
  File "/home/robo/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1175, in <lambda>
    return self._apply(lambda t: t.float() if t.is_floating_point() else t)
torch.AcceleratorError: CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

My Config:

Python-3.11
torch-2.9.0.dev20250726+cu128
CUDA: 12.8
(NVIDIA GeForce RTX 5060 Ti, 15848MiB)

Any help is highly appreciated.
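
This "no kernel image is available" error usually means the imported wheel contains no binary for the GPU's compute capability; a 2.9 cu128 nightly should include sm_120, so it is worth checking which torch the script actually imports (the traceback shows ~/.local/lib/python3.10, which may hold an older install). A diagnostic sketch of the compatibility rule (pure helper, names are mine; within one major architecture, a binary built for an equal or lower minor version can run):

```python
import re

def kernel_image_available(arch_list, capability):
    """True if the wheel's compiled 'sm_XX' list contains a binary usable on a
    device with this (major, minor) capability: same major, equal or lower minor."""
    major, minor = capability
    for arch in arch_list:
        m = re.match(r"sm_(\d+)", arch)
        if not m:
            continue
        digits = m.group(1)
        a_major, a_minor = int(digits[:-1]), int(digits[-1])
        if a_major == major and a_minor <= minor:
            return True
    return False

# An RTX 5060 Ti reports capability (12, 0); a cu121-era wheel tops out at sm_90.
print(kernel_image_available(["sm_80", "sm_86", "sm_90"], (12, 0)))    # False
print(kernel_image_available(["sm_90", "sm_100", "sm_120"], (12, 0)))  # True
```

Comparing torch.cuda.get_arch_list() against torch.cuda.get_device_capability(0) on the failing machine should show which case applies.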