This is just to clarify my understanding of JIT.
When we talk about the PyTorch JIT, is the translator that generates TorchScript IR itself a JIT? And when we execute the TorchScript (using the libtorch C++ library), is there another JIT that interprets the TorchScript IR, generating code that calls libtorch APIs and dispatches the calls? If so, should we expect this online JIT to have a runtime overhead in terms of CPU cycles and memory?
No, IR generation is a fully ahead-of-time process. JIT compilation is used to recover information that is potentially dynamic, such as tensor shapes, which can be used to make better optimization decisions at runtime. All of this happens entirely after IR generation, though. Please let us know if you have further questions.
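The split described above can be seen directly from Python. A minimal sketch (the function `f` and the input shape are my own illustration, not from this thread):

```python
import torch

@torch.jit.script
def f(x):
    # Scripting translates the Python source into TorchScript IR
    # ahead of time, before any input has been seen.
    return x * 2.0 + 1.0

# The IR already exists at this point, with no runtime
# information (shapes, dtypes) baked into it yet.
print(f.graph)

# Runtime specialization (e.g. to observed tensor shapes)
# only happens once the function is actually executed.
x = torch.ones(3)
print(f(x))
```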
Where can I find the entry-point function (in the source code) for the JIT? And which function has the code that jumps to the JIT'd (optimized) code? Basically, I want to step through it with a debugger.
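For orientation before reaching for a debugger, the specialized graph can at least be inspected from Python. A sketch, with the caveat that `graph_for` is an internal debugging API and its behavior may differ across PyTorch versions:

```python
import torch

@torch.jit.script
def f(x):
    return torch.relu(x) + 1.0

x = torch.ones(4)
# Run a couple of times so the profiling executor can record
# shape/type information and produce a specialized graph.
f(x)
f(x)

# The generic, ahead-of-time IR:
print(f.graph)
# The graph the executor actually specialized for this input
# (internal debugging API; may change between versions):
print(f.graph_for(x))
```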
Thanks a bunch,
This is true for 1.5, but isn’t true on master, and won’t be true for the upcoming release.