Exporting libtorch model and loading in Python

Is it possible to save an nn::Module to a file in libtorch, and then later load it in Python for inference with PyTorch? I’ve seen several examples of the other way around, but never a complete description of the C++ --> model file --> Python direction that actually works. Thanks!

This does not seem to be something that is requested very often.
I would probably wrap the model in a C++ extension for Python and use that. This stays safely within what is supported.

Or you could try to trace the model and then save it with the JIT, but I think that would lose the model structure: the magic that preserves the module hierarchy when you trace in Python is likely missing in C++. I must admit I haven’t tried tracing in C++ myself. The last time I suggested “just do it as you would in Python” (for anomaly detection) and claimed it should work, it simply segfaulted, and I ended up implementing the feature just to avoid leaving a wrong answer on the forums for posterity. So, no guarantee from me that this works.

Best regards

Thomas

@tom, thanks for the response. Could you please elaborate on your first suggestion (wrapping the model in a C++ extension for Python)? If it’s not too much trouble, could you maybe show a snippet of code? Thanks.

Well, you know how to run your model in C++, right?

Then you’d imitate the C++ extension tutorial, but instead of implementing lltm, you’d call your model.

You could either make the model a static global variable or pass it around as a shared_ptr.
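
To make that concrete, here is a minimal, untested sketch of such an extension, taking the static-global route. Net is a hypothetical stand-in for your actual nn::Module, and the file and function names are made up; the pybind11 boilerplate follows the C++ extension tutorial.

```cpp
// model_ext.cpp -- minimal sketch of a C++ extension wrapping a libtorch model.
#include <torch/extension.h>

#include <memory>
#include <string>

// Hypothetical stand-in for your real nn::Module.
struct Net : torch::nn::Module {
  Net() { fc = register_module("fc", torch::nn::Linear(4, 2)); }
  torch::Tensor forward(torch::Tensor x) { return fc->forward(x); }
  torch::nn::Linear fc{nullptr};
};

// Keep the model in a static global so it survives across calls from Python.
static std::shared_ptr<Net> model = std::make_shared<Net>();

// Load weights previously written in C++ with torch::save(model, path).
void load_weights(const std::string& path) {
  torch::load(model, path);
}

// Run one forward pass; NoGradGuard because this is inference only.
torch::Tensor forward(torch::Tensor input) {
  torch::NoGradGuard no_grad;
  model->eval();
  return model->forward(input);
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("load_weights", &load_weights, "Load weights saved with torch::save");
  m.def("forward", &forward, "Run the wrapped libtorch model");
}
```

On the Python side you would then build and import it, e.g. with torch.utils.cpp_extension.load(name="model_ext", sources=["model_ext.cpp"]) (or a setup.py as in the tutorial), call model_ext.load_weights(...) once, and then model_ext.forward(...) on your inputs.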

Best regards

Thomas
