From libtorch to ONNX

There is a way to load an ONNX or traced model in libtorch, but is there an option to go in the opposite direction? I would like to export a model defined and trained using the C++ interface to the ONNX format. If there is no such feature, what is the reason for that, and how difficult would it be to implement?

Hi, any ideas on this?

As far as I know, quite a bit of the ONNX export is implemented in Python.

So the two main options likely are:

  • Save the weights in C++, rebuild the module in Python, load the weights, and export. Conceptually straightforward, but some effort.
  • You can export a TorchScript module to ONNX, too. So if the module is traceable, you should be able to do that to get a TorchScript module in C++. Then you can load that into Python and export. Given that ONNX export mostly does tracing internally, too, that might be a viable alternative that skips the “implement the model in Python” step.

Best regards

Thomas

Thank you.
" * You can export a TorchScript module to ONNX, too. So if the module is traceable, you should be able to do that to get a TorchScript module in C++. Then you can load that into Python and export. Given that ONNX export mostly does tracing internally, too, that might be a viable alternative that skips the “implement the model in Python” step."

This line: “you should be able to do that to get a TorchScript module in C++”, But now, My net is torch::nn::Moudle, how to convert it to torch::jit::Moudule(which is ScriptModule in python) just like the torch.jit.script does?

So in principle, tracing (as in torch.jit.trace) should work in C++ as well as in Python, but I must admit I didn’t try. (And when I said, anomaly detection should work in C++, I felt the need to implement it just to not be handing out bad advice too often.)

Maybe the easiest is to create that Python model after all.

Best regards

Thomas