Can we benefit from fx.graph from a different language (different framework)?

Hi,

I work at a new NPU vendor (https://furiosa.ai) and am looking for the right way to integrate the PyTorch framework with our chip.

We found TorchDynamo (TorchDynamo and TorchInductor Tutorial — PyTorch Tutorials 1.13.0+cu117 documentation), and this approach looks very promising for exposing our accelerator through PyTorch. We ran some experiments implementing a TorchDynamo backend via ONNX export, since our compiler stack already uses ONNX as one of its major input formats (fx.graph → ONNX → our compiler's codegen function). A rough sketch of that flow is below.
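For concreteness, here is a minimal sketch of such a backend, assuming the 1.13-era `torch._dynamo` API (in PyTorch 2.x the entry point is `torch.compile(model, backend=...)`). `furiosa_compile` is a hypothetical placeholder for our compiler's codegen entry point, not a real API:

```python
# Minimal sketch of the fx.graph -> ONNX -> vendor-compiler flow, assuming
# the 1.13-era torch._dynamo API where backends receive real example inputs.
import io

import torch
import torch._dynamo as dynamo


def onnx_backend(gm: torch.fx.GraphModule, example_inputs):
    # GraphModule is a regular nn.Module, so torch.onnx.export accepts it.
    buf = io.BytesIO()
    torch.onnx.export(gm, tuple(example_inputs), buf)
    onnx_bytes = buf.getvalue()

    # Hand the (unoptimized) ONNX model to the vendor compiler here, e.g.:
    #   compiled = furiosa_compile(onnx_bytes)   # hypothetical entry point
    #   return compiled
    # For this sketch we just fall back to eager execution.
    return gm.forward


model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())
opt_model = dynamo.optimize(onnx_backend)(model)
opt_model(torch.randn(2, 8))
```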

Now, with a somewhat better understanding of fx.graph, we're investigating feasible ways to use it to optimize the graph before passing otherwise-unoptimized ONNX to our compiler (e.g., we could split the graph into one subgraph our compiler can accelerate and a remainder, which would simplify the compiler and runtime implementation; see the sketch below).
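For example, here is a sketch of such a split using `torch.fx.passes.split_module`, where `SUPPORTED_TARGETS` is a made-up stand-in for our compiler's actual op coverage:

```python
# Sketch: split an fx.graph into a partition our compiler can accelerate
# (partition 0) and a fallback remainder (partition 1).
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.split_module import split_module

SUPPORTED_TARGETS = {torch.nn.Linear, torch.nn.ReLU}  # hypothetical coverage

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU(), torch.nn.Sigmoid())
traced = symbolic_trace(model)


def split_callback(node: torch.fx.Node) -> int:
    # Assign each node a partition index; split_module groups them.
    if node.op == "call_module":
        submod = traced.get_submodule(node.target)
        return 0 if type(submod) in SUPPORTED_TARGETS else 1
    return 1


split = split_module(traced, model, split_callback)
print(split.graph)  # calls submod_0 (accelerated) then submod_1 (fallback)
```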

One question arising from this work: can we move the graph-splitting and optimization workflow that our compiler has been doing into the TorchDynamo backend? The early graph transformations for optimization look similar on both sides, and so far our compiler has done this optimization at the ONNX level. If we could do it on the TorchDynamo backend side with fx.graph instead, we think it would be more flexible and scalable; a toy example follows.
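To illustrate what such an fx-level pass could look like, here is a toy rewrite using `torch.fx.subgraph_rewriter.replace_pattern`. The double-ReLU simplification (ReLU is idempotent) is just a stand-in for the kinds of rewrites our ONNX pipeline actually performs:

```python
# Toy fx.graph optimization pass: remove a redundant double ReLU.
import torch
from torch.fx import symbolic_trace, subgraph_rewriter


def pattern(x):
    return torch.relu(torch.relu(x))


def replacement(x):
    return torch.relu(x)


def double_relu(x):
    return torch.relu(torch.relu(x)) + 1


gm = symbolic_trace(double_relu)
subgraph_rewriter.replace_pattern(gm, pattern, replacement)
gm.recompile()
print(gm.code)  # now contains a single relu call
```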

Along the way, we've hit a practical implementation question: our compiler is written in Rust, and we want to transform the graph (whether fx.graph or ONNX) on the Rust side. I understand that fx.graph originates from capturing a Python program, so wanting to use it from a different language may seem a bit odd, but I'd like to raise the question anyway since we're not very familiar with the PyTorch ecosystem.
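Since fx node targets are Python callables, there appears to be no official cross-language serialization for fx.graph; one workaround we've considered is flattening the node list into a neutral format such as JSON and parsing it on the Rust side (or simply keeping the ONNX protobuf, which Rust can already read via a protobuf crate). A rough, lossy sketch:

```python
# Sketch: flatten an fx.graph node list into JSON for consumption outside
# Python. Stringifying targets/args loses the actual callables, so this is
# only suitable for inspection or a hand-rolled interchange format.
import json

import torch
from torch.fx import symbolic_trace


def to_json(gm: torch.fx.GraphModule) -> str:
    nodes = []
    for node in gm.graph.nodes:
        nodes.append({
            "name": node.name,
            "op": node.op,                       # placeholder / call_function / ...
            "target": str(node.target),          # stringified; loses the callable
            "args": [str(a) for a in node.args],
            "kwargs": {k: str(v) for k, v in node.kwargs.items()},
        })
    return json.dumps(nodes, indent=2)


gm = symbolic_trace(torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()))
print(to_json(gm))
```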

Is there any viable way to benefit from fx.graph from a different language (or a different framework, implementation, etc.)? If not, what's the right approach for a vendor who wants to pre-optimize graphs before compilation?

Thanks.