Hi,
I can see a new module in development called torch.fx with symbolic trace capability
Could you please explain the difference between torch.fx and torch.jit.script()
Thanks & Regards
Hello @mathmanu,
torch.fx is different from TorchScript in that it is a platform for Python-to-Python transformations of PyTorch code. TorchScript, on the other hand, is more targeted at moving PyTorch programs outside of Python for deployment purposes. In this sense, FX and TorchScript are orthogonal to each other, and can even be composed with each other (e.g. transform PyTorch programs with FX, then subsequently export to TorchScript for deployment).
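As a sketch of that composition (assuming a recent PyTorch where `torch.fx` is available, and using a made-up toy module for illustration): trace a module with FX, then export the resulting `GraphModule` to TorchScript.

```python
import torch
import torch.fx

# A toy module, purely for illustration
class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

# Step 1: capture/transform with FX (here just a plain trace)
traced = torch.fx.symbolic_trace(MyModule())

# Step 2: export the FX-produced module to TorchScript for deployment
scripted = torch.jit.script(traced)

x = torch.randn(2, 4)
# The scripted module computes the same result as the traced one
print(torch.allclose(scripted(x), traced(x)))  # prints: True
```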
Please stay tuned for more information about FX early next year. Note that FX is very unstable at this point and we do not recommend building off of the code in master at this time.
Could you please describe what this means?
Python code generation is what makes FX a Python-to-Python (or Module-to-Module) transformation toolkit. For each Graph IR, we can create valid Python code matching the Graph's semantics. This functionality is wrapped up in GraphModule, which is a torch.nn.Module instance that holds a Graph as well as a forward method generated from the Graph.
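A minimal sketch of that relationship (assuming `torch.fx` is importable; the function `f` is just an example): tracing produces a `GraphModule`, whose `.graph` is the IR and whose `.code` is the Python `forward` generated from it.

```python
import torch
import torch.fx

# An example function to trace
def f(x):
    return x + x.relu()

gm = torch.fx.symbolic_trace(f)

# GraphModule is both the IR container and a normal nn.Module
assert isinstance(gm, torch.fx.GraphModule)
assert isinstance(gm, torch.nn.Module)

print(gm.graph)  # the Graph IR
print(gm.code)   # the Python source of the forward() generated from the Graph
```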
Why does one need to translate Python to Python? Is the IR the same IR that we acquire during jit.trace?
Hi @zetyquickly,
Could you please describe what this means?
This concept is introduced in the documentation: torch.fx — PyTorch 2.1 documentation. FX produces valid Python nn.Module instances from its Graph representation.
Why does one need to translate Python to Python?
FX emphasizes generating Python code so that it can be used within the existing PyTorch eager ecosystem. That is to say, code transformed by FX is not locked into one specific runtime (e.g. TorchScript) and all the normal tooling that can be used with normal PyTorch modules can be used with FX-generated modules.
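To illustrate this point under the same assumptions (recent PyTorch, `torch.fx` available), here is a toy Graph transformation — swapping `torch.relu` for `torch.neg`, purely as an example — after which the result is still an ordinary eager-mode module:

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

gm = torch.fx.symbolic_trace(M())

# Edit the Graph: replace torch.relu with torch.neg (a toy transformation)
for node in gm.graph.nodes:
    if node.op == "call_function" and node.target is torch.relu:
        node.target = torch.neg
gm.recompile()  # regenerate the Python forward() from the modified Graph

x = torch.tensor([1.0, -2.0])
print(gm(x))  # prints: tensor([-1.,  2.])
```

Because `gm` is a plain `nn.Module` running ordinary Python, the usual eager tooling (autograd, optimizers, hooks, debuggers) applies to it unchanged.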
Is the IR the same IR that we acquire during jit.trace?
No, the IR is not the same as that produced by jit.trace, intentionally so. FX is an entirely separate system that is superior to jit.trace in several ways:

- jit.trace often silently captures wrong representations of the traced program
- FX's IR is higher-level (e.g. torch.nn module calls are preserved rather than traced through). This is much easier to work with and understand, and we have seen major productivity improvements using this IR.
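This preservation of module calls is easy to see in practice (same assumptions as above; `Net` is a made-up example): a `torch.nn.Linear` submodule shows up as a single `call_module` node in the Graph rather than being decomposed into lower-level ops.

```python
import torch
import torch.fx

# An example network with a named submodule
class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 3)

    def forward(self, x):
        return self.linear(x).relu()

gm = torch.fx.symbolic_trace(Net())
ops = [(n.op, str(n.target)) for n in gm.graph.nodes]
print(ops)
# The Linear layer appears as one ("call_module", "linear") node,
# preserved rather than traced through into its constituent operations.
```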