Triton installation and requires_grad mismatch


On the torch.compile page, it says that torchtriton might need to be installed via pip install torchtriton --extra-index-url ""

I tried that, but when I wanted to compile my model, PyTorch complained that I don't have a backend installed and that I should check out OpenAI's Triton page, which I did. The error disappeared after I installed the nightly version from OpenAI's GitHub.

Why is that? I thought the point of torchtriton was to not depend on the OpenAI version. Maybe I misunderstood.

After this, compilation seemed to work, but the model wasn't any faster, so I think there are still some issues.
I am getting warnings like this one:

[2023-02-23 11:14:21,084] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64)
   function: '<graph break in forward>' (/X/simulator/tiles/
   reasons:  tensor 'x_input_scaled' requires_grad mismatch. expected requires_grad=0

But when I inspect the variable during training, requires_grad is always False. In general, what does this warning mean?
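As far as I understand it, torch.compile keeps one compiled artifact per combination of input "guards" (shape, dtype, requires_grad, ...), and this warning fires when too many distinct guard sets pile up for one function. Here is a toy sketch of that mechanism with made-up names and a smaller limit, not Dynamo's actual code:

```python
# Made-up sketch of a guard cache; NOT Dynamo's real implementation.
# Each compiled artifact is stored under a guard key; when a call's
# guards match no cached entry, a fresh "compile" happens, and once the
# cache holds cache_size_limit entries the function falls back to eager.

CACHE_SIZE_LIMIT = 4  # the real default mentioned in the warning is 64

class ToyCompileCache:
    def __init__(self):
        self.entries = {}   # guard key -> "compiled" artifact
        self.warnings = []

    def call(self, shape, dtype, requires_grad):
        key = (shape, dtype, requires_grad)      # the guards
        if key in self.entries:
            return self.entries[key]             # cache hit, no recompile
        if len(self.entries) >= CACHE_SIZE_LIMIT:
            self.warnings.append(f"hit cache_size_limit ({CACHE_SIZE_LIMIT})")
            return "eager fallback"              # stop compiling this function
        self.entries[key] = f"compiled-{key}"    # a fresh "compile"
        return self.entries[key]

cache = ToyCompileCache()
# Stable inputs: one compile, then cache hits.
for _ in range(3):
    cache.call((8, 8), "float32", False)
assert len(cache.entries) == 1

# Inputs whose guards keep changing fill the cache until the limit hits.
for n in range(5):
    cache.call((n, n), "float32", True)
assert len(cache.entries) == CACHE_SIZE_LIMIT
assert len(cache.warnings) == 2
```

So a requires_grad mismatch in the "reasons" line means the cached artifact was compiled under one requires_grad value but called with another, which forces a recompile each time it flips.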

I am also getting a lot of these:

torch/_functorch/ UserWarning: Your compiler for AOTAutograd is returning a function that doesn't take boxed arguments. Please wrap it with functorch.compile.make_boxed_func or handle the boxed arguments yourself. See for rationale.

I looked at the issue and it seems that one must pass the args as a list so that the memory can be freed. Is that something I have to worry about in user code?
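My rough understanding of the boxed convention: instead of positional arguments, the compiled function receives one mutable list and drains it as it consumes the inputs, so the caller's references are dropped and the memory can be freed early. The helper below is a simplified stand-in I wrote for illustration, not the real functorch.compile.make_boxed_func:

```python
# Simplified sketch of the "boxed" calling convention; the real adapter
# is functorch.compile.make_boxed_func, this stand-in only illustrates
# the idea.

def make_boxed_func_sketch(fn):
    def boxed_fn(args_list):
        # Drain the box so the only remaining references live inside fn;
        # the runtime can then free each input as soon as possible.
        args = list(args_list)
        args_list.clear()
        return fn(*args)
    boxed_fn._boxed_call = True  # marker checked by the runtime
    return boxed_fn

def plain_fn(a, b):
    return a + b

boxed = make_boxed_func_sketch(plain_fn)
inputs = [1, 2]
out = boxed(inputs)
assert out == 3
assert inputs == []  # the box was emptied by the callee
```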


The pytorch-triton binary will be installed as a dependency when the nightly binaries are installed. Also note that torchtriton is the name of the old binaries, which you shouldn’t install manually.
Unfortunately, it seems the nightly binaries are broken right now and you can track the issue here.

Are the issues I am seeing related to this, or is it just that my code isn't a good fit for compile to work?

The warnings you are seeing might be related to the usage of the old torchtriton binary, so I would recommend installing the latest nightly and allowing PyTorch to pull in the needed pytorch-triton dependency.
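For example, something along these lines (the cu117 tag and nightly index URL are just one variant; pick the matching command for your setup from the install matrix on pytorch.org):

```shell
# Remove the manually installed triton packages first.
pip uninstall -y torchtriton triton

# Install a PyTorch nightly; pytorch-triton comes along as a dependency.
pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cu117
```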

Also, those warnings you're seeing from AOT Autograd should be fixed by fix spurious aot autograd warning by bdhirsh · Pull Request #95521 · pytorch/pytorch · GitHub 🙂

Ok, thanks. I will try it out soon.