We have developed an NN JIT compiler that can compile a whole deep learning model. Now we want to add transparent TorchScript-style JIT support, like "pytorch/glow/torch_glow" does, so that users can just `import my_jit` and have all the JIT work forwarded to our backend; PyTorch would only act as a frontend that outputs a TorchScript graph.
When reading torch_glow's code, I found it hard to understand how it is supposed to work, so I was wondering whether there is a document or tutorial on how to implement a custom JIT backend?
So it seems we need to create a `dummy_op` and a custom post pass that fuses everything in the TorchScript graph into this `dummy_op`, and then we could do the JIT work during dispatch on `dummy_op`? Is this right?
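To make the question concrete, here is roughly what I imagine the fusion pass would look like. This is a sketch, not working code: `my_jit::fusion_group` and the supported-op whitelist are made-up names, while `torch::jit::CustomFuseGraph` and `torch::jit::RegisterPass` are the actual PyTorch helpers (signatures may differ between versions):

```cpp
#include <unordered_set>

#include <torch/csrc/jit/ir/ir.h>
#include <torch/csrc/jit/pass_manager.h>
#include <torch/csrc/jit/passes/graph_fuser.h>

namespace {

// Whitelist of ops our compiler can handle -- placeholder, assumption.
bool isSupportedByMyJit(torch::jit::Node* node) {
  static const std::unordered_set<c10::Symbol> supported = {
      c10::Symbol::fromQualString("aten::relu"),
      c10::Symbol::fromQualString("aten::conv2d"),
  };
  return supported.count(node->kind()) > 0;
}

// Collapse maximal supported regions into subgraph nodes of a custom
// kind -- this is the "dummy_op" described above.
void fuseForMyJit(std::shared_ptr<torch::jit::Graph>& graph) {
  torch::jit::CustomFuseGraph(
      graph,
      isSupportedByMyJit,
      c10::Symbol::fromQualString("my_jit::fusion_group"));
}

// Post pass: runs after PyTorch's own optimization passes.
static torch::jit::RegisterPass fusePass(fuseForMyJit);

} // namespace
```

At execution time one would then register an operator for `my_jit::fusion_group` that compiles and runs the attached subgraph, which as far as I can tell is essentially what torch_glow does with its fusion node.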
I also noticed that there is a `torch::jit::registerPrePass`; is it fine to do the fusing there as well? We also have an IR optimization system of our own, and we might want to work on the raw graph IR. Could fusing in a pre pass cause any issues?
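For reference, what I have in mind for the pre pass is something like the sketch below (`registerPrePass` is the real API; the body is a placeholder for where our own optimizer would hook in):

```cpp
#include <torch/csrc/jit/ir/ir.h>
#include <torch/csrc/jit/pass_manager.h>

namespace {

// A pre pass sees the graph before PyTorch's optimization passes run,
// i.e. something much closer to the raw TorchScript IR.
void myJitPrePass(std::shared_ptr<torch::jit::Graph>& graph) {
  // Our own IR optimizer / fuser (an assumed hook, not a real API)
  // would run here, on the unoptimized graph.
  graph->dump(); // print the raw graph, for illustration
}

static const auto prePassId = torch::jit::registerPrePass(myJitPrePass);

} // namespace
```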
Another question: Glow also registers a new backend, `torch::jit::backend<TorchGlowBackend>("glow")`, together with a preprocessing function for it, `torch::jit::backend_preprocess_register("glow", preprocess);`. It seems this backend isn't used during the transparent JIT, so when should I implement such a backend, and what is the use case of `torch::jit::backend`?
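From what I can piece together, such a backend would look roughly like this sketch (`MyJitBackend` and `"my_jit"` are made-up names; the interface is `PyTorchBackendInterface`, but the exact signatures, in particular the `preprocess` one, vary across PyTorch versions):

```cpp
#include <torch/csrc/jit/backends/backend.h>
#include <torch/csrc/jit/backends/backend_preprocess.h>

class MyJitBackend : public torch::jit::PyTorchBackendInterface {
 public:
  bool is_available() override { return true; }

  // Turn the preprocessed module into one handle per compiled method.
  c10::impl::GenericDict compile(
      c10::IValue processed,
      c10::impl::GenericDict method_compile_spec) override {
    auto handles = c10::impl::GenericDict(
        c10::StringType::get(), c10::AnyType::get());
    // ... invoke our compiler on `processed` and fill `handles` ...
    return handles;
  }

  // Run one compiled method on our runtime.
  c10::impl::GenericList execute(
      c10::IValue handle,
      c10::impl::GenericList inputs) override {
    auto outputs = c10::impl::GenericList(c10::AnyType::get());
    // ... dispatch to the compiled artifact behind `handle` ...
    return outputs;
  }
};

// Ahead-of-time lowering step, run when the user explicitly calls
// torch._C._jit_to_backend("my_jit", module, spec) from Python.
c10::IValue preprocess(
    const torch::jit::Module& mod,
    const c10::Dict<c10::IValue, c10::IValue>& method_compile_spec,
    const torch::jit::BackendDebugHandleGenerator& generator) {
  // ... lower `mod` to whatever representation `compile` expects ...
  return mod._ivalue();
}

static auto backend = torch::jit::backend<MyJitBackend>("my_jit");
static auto pre =
    torch::jit::backend_preprocess_register("my_jit", preprocess);
```

If I understand correctly, this is an explicit ahead-of-time delegation path (`to_backend`), where the user opts in per module, as opposed to the transparent fusion-pass path triggered by an import.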
This is approximately what I had in mind. (And I do have a branch somewhere, which I had forgotten about, that tries to make more of the JIT accessible from Python… Sigh.)
There are some passes that you would want to run first, but it is probably OK to register a pre pass too, if you prefer the IR at that level.
Now that you mention it, using `backend` is probably the right way to go. It is a relatively new part of the JIT (from 2020, and in particular newer than the tutorial you linked).