Windows support timeline for torch.compile

Is there a rough timeline for when/if we can expect Windows to be supported for torch.compile and torch._dynamo.optimize?

A cursory search didn’t yield anything so posting it here.


torch.compile itself will work; it's just that Inductor, the default compiler, which leverages Triton under the hood, does not.

You can +1 this issue if you like: Is there a plan to support Windows? · Issue #1640 · openai/triton · GitHub

@marksaroufim I didn’t understand. Can you please elaborate?

A simplified view of torch.compile on GPU is that it takes your Python code and generates openai/triton code (we call the compiler that does this transformation Inductor). Because Triton does not support Windows, it’s not obvious how torch.compile will support GPU on Windows.
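To make that concrete, here is a minimal sketch of the default path (the function `fn` and the tensor shapes are just placeholders). The default backend is Inductor, and the Triton codegen it performs for GPU tensors is the step that has no Windows support:

```python
import torch

# TorchDynamo captures the Python code; TorchInductor then generates
# Triton kernels for any GPU work in the captured graph.
def fn(x):
    return torch.sin(x) + torch.cos(x)

compiled_fn = torch.compile(fn)  # backend defaults to "inductor"

# On a CUDA device this triggers Triton code generation; that is the part
# that currently cannot run on Windows because Triton has no Windows build.
x = torch.randn(1024, device="cuda" if torch.cuda.is_available() else "cpu")
print(compiled_fn(x).shape)
```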

However, supporting Windows on CPU should be fine, and there are other backends you can pass in via torch.compile(backend="example_backend") that might have better Windows support.
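As a rough sketch (which backends are actually available depends on your platform and PyTorch version), you can list the registered backends and pass one that does not depend on Inductor/Triton, e.g. "aot_eager":

```python
import torch
import torch._dynamo as dynamo

# Show which compile backends are registered in this install.
print(dynamo.list_backends())

def fn(x):
    return x * 2 + 1

# "aot_eager" runs the captured graph with plain eager kernels,
# so it avoids the Inductor/Triton dependency entirely.
compiled_fn = torch.compile(fn, backend="aot_eager")
print(compiled_fn(torch.randn(8)))
```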

Thanks for the detailed reply. From what I understand, you are saying that torch.compile should work on CPU on Windows. Am I correct in my understanding? I am trying to use torch.compile on CPU only.
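For reference, this is roughly what I am trying to run (a minimal sketch; the function and shapes are placeholders, everything stays on CPU):

```python
import torch

def fn(x):
    return torch.relu(x).sum()

compiled_fn = torch.compile(fn)  # default backend, no CUDA involved
x = torch.randn(64, 64)          # CPU tensor
print(compiled_fn(x))
```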