Can I ask for feedback on a custom-built PyTorch wheel (CUDA 12.8 / RTX 5070 Ti)?

Hi, I built PyTorch manually from source using CUDA 12.8 and cuDNN 9.8 for compatibility with my RTX 5070 Ti.

I’d like to ask whether it’s okay to share my custom .whl file and ask others (including the PyTorch team) to test whether it works correctly on the SM120 architecture. I’m trying to confirm that my build is valid and stable enough.
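
For reference, this is the quick sanity check I run against the wheel before asking anyone else to try it (a minimal sketch; the expected values in the comments are just what my own build reports):

```python
import torch

# Build and runtime info for the custom wheel
print(torch.__version__)                 # expect 2.6.0a0 plus a local build tag
print(torch.version.cuda)                # expect 12.8
print(torch.backends.cudnn.version())    # expect a 9.8.x cuDNN version
print(torch.cuda.is_available())

# Confirm the RTX 5070 Ti is detected with the expected compute capability
# (SM120 should report as (12, 0))
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))

# Tiny GPU workload to confirm kernels actually launch on this architecture
x = torch.randn(1024, 1024, device="cuda")
y = torch.randn(1024, 1024, device="cuda")
z = x @ y
torch.cuda.synchronize()
print(z.mean().item())
```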

Is it appropriate to post here for that kind of verification?

Thank you for your help!

gradiuse/custom-pytorch-2.6.0a0: Custom-built PyTorch 2.6.0a0 for Windows 11 with CUDA 12.8 and cuDNN 9.8 - Successfully passed advanced stability tests - Supports: torch.compile, TF32, CUDA AMP - Status: Stable build (internal)
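
For context, the “Supports: torch.compile, TF32, CUDA AMP” line in the repo description is based on smoke tests roughly like the one below (a minimal sketch; the toy model, sizes, and hyperparameters are placeholders, not the GPT-SoVITS workload):

```python
import torch

# Opt in to TF32 matmuls (trades a little precision for speed on newer GPUs)
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

device = "cuda"

# Small throwaway model just to exercise the compile + autocast paths
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).to(device)

compiled = torch.compile(model)

opt = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.amp.GradScaler("cuda")

x = torch.randn(64, 512, device=device)
target = torch.randint(0, 10, (64,), device=device)

# One training step under CUDA AMP through the compiled module
with torch.autocast(device_type="cuda", dtype=torch.float16):
    out = compiled(x)
    loss = torch.nn.functional.cross_entropy(out, target)

scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
torch.cuda.synchronize()
print("loss:", loss.item())
```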

Why don’t you use the nightly binary with CUDA 12.8 instead?

The PyTorch version used by GPT-SoVITS v3 doesn’t support my graphics card, so I made this build before I knew about the nightly binaries. Thanks for letting me know. I’ll download the nightly build right away and try it.