My RTX 5080 GPU can't work with PyTorch

I think that I’ve already done that, but right now I’m pretty much just copying and pasting commands from threads like this and seeing if they solve it.

Are you able to give me a quick 1, 2, 3 on how to actually do that, so that I know I’m doing it the right way?

I take it that this should be done via the Windows Command Prompt?

No, you did not uninstall all PyTorch binaries, as the install command claims PyTorch is already installed.

Uninstall all previously installed PyTorch binaries (and make sure no installs are detected, e.g. via pip list | grep torch), then install the latest binary with CUDA 12.8 by pasting the install command into your terminal.

I don’t know which setup and virtual environment you are using, so make sure you are using a terminal which detects Python and which you normally use to execute scripts.
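The steps above can be sketched as a terminal session. This is a sketch, not a definitive sequence: on Windows Command Prompt, substitute findstr for grep, and the index URL shown is the official PyTorch wheel index for CUDA 12.8 builds.

```shell
# Check whether any PyTorch packages are still installed
# (on Windows cmd: pip list | findstr torch)
pip list | grep torch

# Uninstall every PyTorch binary found above; repeat until nothing is listed
pip uninstall -y torch torchvision torchaudio

# Install the latest binaries built with CUDA 12.8
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
```

Run the pip list check again afterwards to confirm only the freshly installed versions remain.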

Let’s take it as read that I don’t know how to do any of that, or how to identify which binaries or which versions I have installed.

Is there a guide or tutorial available that tells me how to remove everything, all dependencies and binaries, so that I can start from scratch?

I think that it might be easier to start over than to troubleshoot, as you’re starting quite some way ahead of where I need to be.

Maybe I’m on the wrong forum for this.

To get started with Python itself, these resources might be helpful, as they explain the first steps for non-programmers. The PyTorch docs expect users to have some basic experience with Python. Let me know if this helps.

Sharing my experience installing PyTorch for an RTX 5080 GPU on Windows here:
I installed the below versions of torch, torchvision, and torchaudio in an environment with CUDA 12.8 and Python 3.9.
First, download the wheels below:
torch-2.8.0.dev20250530+cu128-cp39-cp39-win_amd64.whl
torchvision-0.23.0.dev20250531+cu128-cp39-cp39-win_amd64.whl
torchaudio-2.8.0.dev20250531+cu128-cp39-cp39-win_amd64.whl

Then run pip install --force-reinstall torch-2.8.0.dev20250530+cu128-cp39-cp39-win_amd64.whl torchvision-0.23.0.dev20250531+cu128-cp39-cp39-win_amd64.whl torchaudio-2.8.0.dev20250531+cu128-cp39-cp39-win_amd64.whl

Note that I am using Windows. You can find more versions of torch here:
https://download.pytorch.org/whl/cu128/torch, and likewise for torchvision and torchaudio. Be aware of dependency conflicts and choose the right version.
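One common failure with manually downloaded wheels is a Python-tag mismatch: the cp39 wheels above only install under Python 3.9. A small hypothetical helper (not part of pip) to pre-check a wheel filename against the running interpreter:

```python
import sys

def wheel_matches_interpreter(wheel_name: str) -> bool:
    """Check whether a wheel's Python tag (e.g. 'cp39') matches this interpreter.

    Wheel filenames follow: name-version[-build]-pythontag-abitag-platform.whl,
    so the third-from-last dash-separated component is the Python tag.
    """
    stem = wheel_name[: -len(".whl")]
    python_tag = stem.split("-")[-3]
    expected = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return python_tag == expected

# The cu128 wheels above carry the cp39 tag, so this prints True only
# when run under Python 3.9:
print(wheel_matches_interpreter(
    "torch-2.8.0.dev20250530+cu128-cp39-cp39-win_amd64.whl"))
```

If this returns False, pip will reject the wheel as "not a supported wheel on this platform", and you need a wheel matching your Python version instead.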

For anyone who comes across this thread, @ptrblck’s solution did the trick. Here’s how I did it:

Uninstall Torch

  • pip uninstall torch

Install the libraries

At the time of this post, CUDA Toolkit 12.9 is the LTS version.

  • pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu129
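After reinstalling, a quick sanity check (using standard PyTorch APIs) confirms which build you got and whether the GPU is visible:

```python
import torch

print(torch.__version__)          # version string of the installed build
print(torch.version.cuda)         # CUDA version the binary was compiled against
print(torch.cuda.is_available())  # True if the GPU is usable
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the RTX 5080
```

If is_available() returns False, the wrong (CPU-only) binary is still installed or the driver is too old for the build.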

Have you been able to install/use llama.cpp with GPU support using that driver?

But are you able to use llama.cpp? My compiles from source always run into a 128-bit error that I have not been able to solve.

I just compiled and ran llama.cpp from the sources. No problem so far.

Running on Ubuntu 24.04
Intel i9
RTX 5090