PyTorch 2.0: compile to native code

PyTorch has circled the idea of converting models to an intermediate representation (IR) several times: torch.fx captures models as a graph IR, and torch.compile now builds on the same machinery.
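For example, torch.fx can already trace a model into a graph IR today. A minimal sketch (assumes PyTorch is installed; the toy function stands in for a real model):

```python
import torch
import torch.fx

def model(x):
    # A toy "model": torch.fx traces it into a graph of IR nodes
    return torch.relu(x) + 1.0

# Capture the function as a GraphModule holding the IR
gm = torch.fx.symbolic_trace(model)

# The IR: placeholder -> call_function(relu) -> add -> output
print(gm.graph)
```

The printed graph is exactly the kind of IR a native-code backend could consume.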

It would be really cool if PyTorch converted models to LLVM IR, then used LLVM to compile them to native code, which could be consumed as a static library, a shared library, or a Python module.
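The back half of that pipeline already exists as ordinary LLVM tooling. A hypothetical sketch, assuming PyTorch had emitted a file `model.ll` containing LLVM IR (no such exporter or file exists today; only the `llc`/`ar`/`clang` invocations are real):

```shell
# Hypothetical input: model.ll, LLVM IR exported from PyTorch.
llc -filetype=obj -relocation-model=pic model.ll -o model.o  # IR -> object code
ar rcs libmodel.a model.o                # static library:  model.h + libmodel.a
clang -shared model.o -o libmodel.so     # shared library:  model.h + libmodel.so
```

Everything PyTorch would need to add is the front half: lowering its graph IR to `model.ll`.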

So the workflow would be: train → export to LLVM IR → compile with LLVM → {model.h + libmodel.a, model.h + libmodel.so}, with runtime dynamic linking to libcuda.so via dlopen.

PyTorch would then essentially become its own language with a proper compiler. This is in line with Software 2.0, where machine learning becomes just another programming language: it emits libraries or binaries that can be consumed by other software. Sounds great.