Reading the available documentation and a bit of the code in torch/_export/, I couldn’t figure out whether AOTInductor supports autograd, and if not, whether support is on the roadmap. It would be very useful for models that rely on autograd during inference. Is there somewhere I can find more information? Is there a way to export a model’s backward() as a *.so library on Linux?
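To make the use case concrete, here is a minimal (hypothetical) example of a model that needs autograd at inference time: an energy model whose useful output is a gradient of its scalar output, e.g. forces from an energy. The `EnergyModel` class and its shapes are illustrative, not from any real codebase; exporting this with AOTInductor would require tracing through `torch.autograd.grad`.

```python
import torch

class EnergyModel(torch.nn.Module):
    """Illustrative model: predicts a scalar energy and returns its
    gradient w.r.t. the input positions (the "forces")."""
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(3, 1)

    def forward(self, pos):
        # Gradients w.r.t. the *input* are part of the inference output,
        # so a backward pass runs inside forward().
        pos = pos.requires_grad_(True)
        energy = self.lin(pos).sum()
        (forces,) = torch.autograd.grad(energy, pos, create_graph=False)
        return energy, forces

model = EnergyModel()
energy, forces = model(torch.randn(5, 3))
assert forces.shape == (5, 3)
```

This works fine in eager mode; the question is whether an AOTInductor-compiled .so can capture the inner `torch.autograd.grad` call.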
This is something we could support in principle, although I’m not sure of the timeline. (If I remember correctly, one extra piece of work needed for training support in AOTInductor is handling in-place mutations to the parameters.)
Can you file a GitHub issue?