The nightly binaries already ship with CUDA 11.8 and 12.1, which would be the targets for the next PyTorch 2.1.0 release. We will start bringing up 12.2 soon, but it would miss the 2.1.0 release. In the meantime you can build from source.
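As a quick sanity check of which CUDA toolkit a given binary was built against, something like this works (assuming a standard pip/conda install):

```python
import torch

# CUDA toolkit version the binary was compiled against,
# e.g. "11.8" or "12.1"; None for CPU-only builds.
print(torch.version.cuda)

# Whether a CUDA device is actually visible at runtime.
print(torch.cuda.is_available())
```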
I will take a look at building PyTorch from source. PyTorch is certainly a huge piece of software in every sense of the word, but NVIDIA's stack seems even more complex to build, as it has so many interactions with the hardware.
OpenAI/Triton should not depend on any PyTorch CUDA libs, so I would not expect to see issues. However, I'm not familiar enough with Triton's issues and roadmap.
It’s still unclear what kind of issues you are seeing. Triton should use its own ptxas, shouldn’t it?
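One way to check which ptxas Triton would pick up is to look for the bundled binary inside the installed package. This is a rough sketch: the exact package layout varies between Triton releases, so it searches the whole package tree rather than relying on any specific internal path:

```python
import os
import triton

# Walk the installed triton package and look for a bundled ptxas binary.
# The location differs across Triton versions, so we search the whole
# tree instead of hard-coding a path.
pkg_root = os.path.dirname(triton.__file__)
for root, _dirs, files in os.walk(pkg_root):
    for name in files:
        if name in ("ptxas", "ptxas.exe"):
            print(os.path.join(root, name))
```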
Which dependencies would conflict, and what could fail? Again, I'm not a code owner of Triton, but I would not expect any Triton->PyTorch dependencies for CUDA.
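To see what a pip-installed Triton actually declares as dependencies (and confirm none of them pull in PyTorch's CUDA libs), a minimal check could be:

```python
from importlib.metadata import requires

# Declared dependencies of the installed triton distribution;
# returns None if the package declares none.
print(requires("triton"))
```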