I have a 5070 Ti, and I’m honestly tired of building libraries from source, downloading a new nightly version every evening, and testing it against many other frameworks. I was hoping the 2.9.0 stable release would add this support, but unfortunately it hasn’t, and I’m still getting endless errors from all the other libraries at the same time. Given that, I would really like to know when full support for the 50xx-series graphics cards will be available.
PyTorch 2.7.0 already added Blackwell support in our PyTorch wheels built with CUDA 12.8, and nightly binaries were available even before that. There is no need to build PyTorch from source; you can install any of our recent releases with CUDA 12.8+.
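For anyone landing here, a quick sanity check after installing a cu128 wheel is to look at the compiled architecture list. A minimal sketch — the helper name is mine; the real list comes from `torch.cuda.get_arch_list()`:

```python
# Sketch: check whether a PyTorch wheel's compiled arch list covers Blackwell.
# Install the cu128 wheel first, e.g.:
#   pip install torch --index-url https://download.pytorch.org/whl/cu128

def has_blackwell_kernels(arch_list):
    """True if the wheel ships sm_120 kernels or compute_120 PTX,
    which the RTX 50xx series (compute capability 12.0) requires."""
    return any(a in ("sm_120", "compute_120") for a in arch_list)

# With torch installed, pass the real list:
#   import torch
#   has_blackwell_kernels(torch.cuda.get_arch_list())
print(has_blackwell_kernels(["sm_80", "sm_90", "sm_120"]))  # → True (cu128-style list)
print(has_blackwell_kernels(["sm_50", "sm_60", "sm_90"]))   # → False (older wheel)
```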
Yeah, you were right. I installed PyTorch stable 2.9.0 (with CUDA 12.8), and the GPU was successfully detected.
However, when I tried to test XTTS, a new problem arose: it requires torchcodec, and as far as I understand, its latest version, torchcodec 0.7, is not compatible with PyTorch 2.9.0.
Do I need to install version 2.8.0 instead?
The latest torchcodec release is 0.8, as seen here, which is compatible with the latest torch==2.9.0 release, as described here.
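As a general pattern, pinning matched releases avoids this class of mismatch. A minimal sketch — the version pairs below are only the ones mentioned in this thread, so treat the table as illustrative and check the torchcodec README for the authoritative matrix:

```python
# Illustrative torchcodec -> torch compatibility pairs (taken from this thread
# only; consult the official torchcodec README for the authoritative matrix).
COMPAT = {
    "0.8": "2.9",
    "0.7": "2.8",
}

def matching_torch_minor(torchcodec_version):
    """Return the torch minor series a torchcodec release is built against,
    or None if the pair is not in our (partial) table."""
    major_minor = ".".join(torchcodec_version.split(".")[:2])
    return COMPAT.get(major_minor)

print(matching_torch_minor("0.8.0"))  # → 2.9
print(matching_torch_minor("0.7.0"))  # → 2.8
```

So pinning the pair together, e.g. `pip install torch==2.9.* torchcodec==0.8.*`, keeps the two in sync.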
Now I see. It turns out torchcodec was updated to 0.8 just yesterday; I had looked at the repository before the 0.8 release was published. I’ll try the new version of torchcodec now.
Edited: Yes. It’s really working now. All I had to do was wait for the torchcodec repository to be updated. Thanks for your help!
That’s great to hear! Let us know in case you have any other questions or unexpected issues.
I would recommend being extremely careful when downloading unverified binaries from unknown sources, for obvious reasons. Our nightly binaries already support all Blackwell architectures.
Hello,
I have the same issue, but on Fedora 43, running in a Docker environment. I get this:
NVIDIA GeForce RTX 5070 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5070 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(
NVIDIA GeForce RTX 5070 Ti
torch.cuda.get_arch_list()
['sm_50', 'sm_60', 'sm_61', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90']
which then causes the error below:
RuntimeError: CUDA error: no kernel image is available for execution on the device
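The failure mode is mechanical: a cubin only runs on the exact SM it was compiled for, and the driver can JIT PTX only forward. A rough sketch of the check (my own helper, not a PyTorch API) shows why a wheel built for sm_50…sm_90 cannot serve a compute-capability-12.0 card:

```python
def kernel_image_available(device_cc, arch_list):
    """Approximate the runtime check: a cubin (sm_XX) must match the device
    exactly, while PTX (compute_XX) can be JIT-compiled for newer devices."""
    major, minor = device_cc
    target = major * 10 + minor
    for arch in arch_list:
        kind, num = arch.split("_")
        if kind == "sm" and int(num) == target:
            return True   # exact kernel image shipped in the wheel
        if kind == "compute" and int(num) <= target:
            return True   # PTX can be JIT-compiled forward by the driver
    return False

old_wheel = ["sm_50", "sm_60", "sm_61", "sm_70",
             "sm_75", "sm_80", "sm_86", "sm_90"]
print(kernel_image_available((12, 0), old_wheel))               # → False
print(kernel_image_available((12, 0), old_wheel + ["sm_120"]))  # → True
```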
After that I went back to CUDA 12.4 and am getting:
CUDA: True 12.4
/opt/real-esrgan/realesrgan/utils.py:63: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data
which will execute arbitrary code during unpickling (See pytorch/SECURITY.md at main · pytorch/pytorch · GitHub for more details). In a future release, the default value for `weights_only` will be flipped to `True`.
This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We
recommend you start setting `weights_only=True` for any use case where you don’t have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
loadnet = torch.load(model_path, map_location=torch.device('cpu'))
/usr/local/lib/python3.10/dist-packages/torch/cuda/__init__.py:230: UserWarning:
NVIDIA GeForce RTX 5070 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5070 GPU with PyTorch, please check the instructions at same link as above …/locally
warnings.warn(
Real-ESRGAN initialized OK on GPU
So, for Linux today, the only options are:
RTX 5070 + PyTorch cu124/cu126 = FP32 only
RTX 5070 + PyTorch cu128 stable = NOT AVAILABLE YET
No, that’s not the case: our stable releases have shipped with CUDA 12.8 since PyTorch 2.7.0, which was released in April 2025.