We are using torch.fx to find functional calls like torch.add(x, x) and replace them with a call to a module. This sometimes fails because torch.fx symbolic tracing cannot handle some cases. torch.dynamo, however, seems to be much more robust than torch.fx at generating the torch.fx.Graph.
Is there a path forward to integrate torch.dynamo more closely with torch.fx?
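For context, the replacement we do can be sketched with the standard torch.fx graph-manipulation pattern (module and submodule names here are made up for illustration):

```python
import torch
import torch.fx as fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.add(x, x)

class AddModule(torch.nn.Module):
    def forward(self, a, b):
        return a + b

traced = fx.symbolic_trace(M())
# register the replacement module under a new (hypothetical) name
traced.add_submodule("replaced_add", AddModule())

for node in traced.graph.nodes:
    if node.op == "call_function" and node.target is torch.add:
        # insert a call_module node and reroute all users to it
        with traced.graph.inserting_after(node):
            new_node = traced.graph.call_module("replaced_add", args=node.args)
        node.replace_all_uses_with(new_node)
        traced.graph.erase_node(node)

traced.recompile()
```

After this pass the graph contains a call_module node instead of the call_function to torch.add, and the GraphModule still computes the same result.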
I managed to get the torch.fx graph out of torch.dynamo by saving the graph through a custom backend. However, it is not as usable as the graph generated by torch.fx tracing, because:
- in some cases one gets multiple graphs. It’s unclear how the multiple graphs need to be connected, e.g. how the output from one graph is passed to another graph.
- the module names are rewritten. They all get names like self_module_conv; I would like to keep the original names.
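The capture setup looks roughly like this (a minimal sketch; the multiple-graph behaviour can be reproduced with an explicit torch._dynamo.graph_break(), and in that case the first graph’s output is what gets fed into the second graph by the resume function):

```python
import torch
import torch._dynamo

graphs = []

def capture_backend(gm, example_inputs):
    # gm is the torch.fx.GraphModule dynamo produced for this fragment
    graphs.append(gm)
    return gm.forward  # run it eagerly, without further compilation

@torch.compile(backend=capture_backend)
def f(x):
    y = torch.add(x, x)
    torch._dynamo.graph_break()  # force a split: two graphs get captured
    return torch.add(y, y)

f(torch.ones(2))
```

After the first call, `graphs` holds one GraphModule per fragment, which is exactly the multiple-graph situation described above.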
fx is a representation and dynamo is a tracer, so they already compose together. The fx tracer is unlikely to continue receiving support, though.
Also, is this the tutorial you’re referring to? Google Colab - I think @SherlockNoMad explained to me how you can figure out which graph pipes into which other graph, but I’m drawing a blank.
Hi @marksaroufim, thanks for the quick response
Yes, I saw that one and it’s also explained in:
There are a few subtle differences that prevent me from using it for my use case when following the tutorial.
- torch.dynamo flattens the module hierarchy. As far as I can tell, it’s no longer possible to link a node back to a specific module. I need that to be able to further configure the model after it has been traced.
- the aforementioned multiple graphs. I think I understand the necessity, but I’d like an easy way (some way likely already exists somewhere deep inside) to connect the outputs correctly.
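On the flattening point: in graphs captured from dynamo, each node’s `meta` dict may carry an `nn_module_stack` entry that maps the rewritten target back to the original submodule path. Whether and how it is populated varies across PyTorch versions, so treat this as a sketch rather than a guaranteed API:

```python
import torch

graphs = []

def capture_backend(gm, example_inputs):
    graphs.append(gm)  # keep the captured torch.fx.GraphModule
    return gm.forward

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv1d(1, 1, kernel_size=1)

    def forward(self, x):
        return self.conv(x)

compiled = torch.compile(Net(), backend=capture_backend)
compiled(torch.ones(1, 1, 4))

# inspect provenance metadata: "nn_module_stack" (when present) maps a
# node back to the original attribute path, e.g. something like "self.conv"
for node in graphs[0].graph.nodes:
    print(node.op, node.target, node.meta.get("nn_module_stack"))
```

If that metadata is present in your build, it may be enough to link nodes back to the modules you want to configure.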
Do you happen to know if there’s a timeline/roadmap that one can look at? Or what’s the best way to get involved myself?
Probably the best way is to author a longer issue here (Issues · pytorch/pytorch · GitHub) describing your use case.