The latest version of PyTorch has the API dynamo_export, which seems designed to export ONNX models via Dynamo, so I have two questions:
- What's the difference between torch.onnx.export and torch.onnx.dynamo_export? They both take a normal nn.Module and generate an ONNX model.
- I can't even export a resnet50 from mmpretrain with dynamo_export, and I can't find any documentation for it, so is it still at an early stage?
They differ in how they capture the graph: the regular export uses the JIT (TorchScript) tracer, while the Dynamo export uses TorchDynamo.
Dynamo export is still at an early stage, but it will likely become the default in the future, given that TorchScript development has been paused.
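For reference, here is a minimal sketch of how the two entry points are typically called, using a torchvision resnet50 as a stand-in for the mmpretrain model and assuming the dynamo_export API shape of recent 2.x releases:

```python
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# TorchScript-based exporter: captures the graph through the JIT tracer.
torch.onnx.export(model, dummy_input, "resnet50_jit.onnx", opset_version=17)

# Dynamo-based exporter: captures the graph with TorchDynamo and returns an
# in-memory ONNX program object that is saved in a separate step.
onnx_program = torch.onnx.dynamo_export(model, dummy_input)
onnx_program.save("resnet50_dynamo.onnx")
```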
Thanks, Mark!
Btw, since dynamo_export takes a normal nn.Module and outputs an ONNX model, could we add an API that accepts a traced fx.Graph and outputs an ONNX model?
I found that PyTorch is also working on another quantization tool, pt2e, so such an API would be useful: we could quantize and optimize models in fx.Graph format and then export them to an ONNX model for serving. A rough sketch of the workflow I have in mind is below.
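This is only an illustration under a couple of assumptions: the pt2e capture entry point has changed across releases (capture_pre_autograd_graph is assumed here), and the final step simply tries handing the quantized GraphModule to dynamo_export since a GraphModule is still an nn.Module; whether that is actually supported is exactly what I'm asking about.

```python
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

# Toy model just for the sketch.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.ReLU(),
).eval()
example_inputs = (torch.randn(1, 3, 32, 32),)

# 1. Capture an fx.GraphModule (the capture API has varied between releases).
captured = capture_pre_autograd_graph(model, example_inputs)

# 2. pt2e quantization: insert observers, calibrate, convert.
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
prepared = prepare_pt2e(captured, quantizer)
prepared(*example_inputs)  # calibration pass
quantized = convert_pt2e(prepared)

# 3. A GraphModule is still an nn.Module, so in principle it could be fed to the
#    Dynamo-based exporter directly (support may vary by PyTorch version).
onnx_program = torch.onnx.dynamo_export(quantized, *example_inputs)
onnx_program.save("quantized_model.onnx")
```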
cc @BowenBao who is maybe already looking into this