Compiling triton with .github/scripts/

Hi community, I have built PyTorch 2.0 with GPU support. The code I used is from the GitHub repo.
I installed it with python install, but when I run python -m torch.utils.collect_env I cannot find triton.
Hence I tried to build triton with .github/scripts/
However, during the process it downloads CUDA 12.0.
I know from a previous post that CUDA 11.7 is preferred, so why is there a difference in CUDA version for code in the same repo?
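In case it helps to narrow this down: a quick way to confirm whether triton is visible to your Python environment, independent of what collect_env prints, is a short import check. This is just a sketch; the helper name is mine, and the only assumption is that the package PyTorch 2.0 looks for is importable as "triton".

```python
# Minimal sketch: check whether the triton package (used by torch.compile)
# is importable in the current environment, without fully importing it.
import importlib.util

def is_installed(name):
    """Return True if a package with this name can be found on the path."""
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    # Prints False if triton is missing, matching what collect_env reports.
    print("triton installed:", is_installed("triton"))
```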

The build works for me and creates a wheel without downloading any CUDA version.

Hi ptrblck,

  1. Is it possible to share how you built triton in the pytorch directory? I want to double-check that my build command is legitimate.
  2. I thought building pytorch with python develop would enable the triton build by default, but that turned out not to be the case. I am wondering how to make sure triton is enabled in my pytorch build.
  3. I used pytorch 2.0-rc2, and the pinned triton commit is d54c04abe2c3e67b2139c68cdbda87b59e8dd01b. I think there does exist a different CUDA requirement, since pytorch 2.0 stays at 11.7 while triton seems to download 12.0.
  1. I used python .github/scripts/ --py-version 3.8, which created pytorch_triton-2.0.0+b8b470bc59-cp38-cp38-linux_x86_64.whl and matches the pinned commit in .github/ci_commit_pins/triton.txt.
  2. I think you are correct, and you would need to build pytorch-triton as a dependency, as seen in my command.
  3. Yes, RC2 used d54c04abe2c3e67b2139c68cdbda87b59e8dd01b, which was recently updated to b8b470bc59. I don’t think Triton should be (or is) downloading any CUDA version; it ships with ptxas and its own libdevice.10.bc library. I also did not see any CUDA download during my build and would not even know how it could install a full CUDA toolkit.
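To back up point 3, you can inspect an installed triton wheel for the bundled ptxas and libdevice files rather than looking for a CUDA download. This is a sketch: the function name is mine, and the exact directory layout inside the wheel is an assumption, so it searches the package tree recursively instead of hard-coding a path.

```python
# Sketch: list CUDA bits (ptxas, libdevice*) bundled inside an installed
# triton wheel. Returns None when the package is not installed. The exact
# layout inside the wheel is an assumption, hence the recursive search.
import importlib.util
from pathlib import Path

def bundled_cuda_files(pkg="triton"):
    spec = importlib.util.find_spec(pkg)
    if spec is None or spec.origin is None:
        return None  # package not installed
    pkg_dir = Path(spec.origin).parent
    hits = sorted(pkg_dir.rglob("ptxas")) + sorted(pkg_dir.rglob("libdevice*"))
    return [str(p.relative_to(pkg_dir)) for p in hits]

if __name__ == "__main__":
    files = bundled_cuda_files()
    print("triton not installed" if files is None else files)
```

If the list is non-empty, the wheel carries its own compiler bits and no system CUDA toolkit is needed at Triton build time.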