How to use AOTInductor output models in non-Python environments

I’m trying to follow the AOTInductor: Ahead-Of-Time Compilation for Torch.Export-ed Models tutorial (PyTorch 2.5 documentation) to export a model from Python and then deploy it in an environment that uses only C++.

If I follow the example, the model.so that Python outputs is dynamically linked against libtorch.so and the other shared libraries that ship with pip install torch. I can reproduce the example if I let the C++ executable link against those same pip artifacts, but I would count that as still needing Python in the target environment. I’ve tried linking against the shared libraries from the equivalent LibTorch version instead, but have been unsuccessful.
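For reference, my C++ side is essentially the runner from the tutorial (a sketch; the model.so path and the {8, 10} input shape are placeholders for my model):

```cpp
#include <iostream>
#include <vector>

#include <torch/torch.h>
#include <torch/csrc/inductor/aoti_runner/model_container_runner_cpu.h>

int main() {
  c10::InferenceMode mode;

  // Load the AOTInductor-compiled shared library produced on the Python side.
  torch::inductor::AOTIModelContainerRunnerCpu runner("model.so");

  // The input must match the shape used (or the dynamic shapes declared)
  // at export time.
  std::vector<torch::Tensor> inputs = {torch::randn({8, 10})};
  std::vector<torch::Tensor> outputs = runner.run(inputs);

  std::cout << "Output: " << outputs[0] << std::endl;
  return 0;
}
```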

What is the intended way of working around this dynamic linkage in a non-Python environment?


cc @desertfire for AOTInductor

It’s worth calling out, though, that libtorch.so is Python-free: as long as you have it installed, you can use it in a C++-only runtime (and dynamically link to it) without needing Python in the environment.
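As a rough sketch of what that looks like (the /opt/libtorch path and the exact link line are assumptions, for a CPU-only standalone LibTorch unpacked locally):

```cpp
// A rough sketch of building against the standalone LibTorch, assuming the
// CPU-only 2.5.x archive is unpacked to /opt/libtorch (path is a placeholder):
//
//   g++ check_libtorch.cpp -std=c++17 \
//       -I/opt/libtorch/include \
//       -I/opt/libtorch/include/torch/csrc/api/include \
//       -L/opt/libtorch/lib -ltorch -ltorch_cpu -lc10 \
//       -Wl,-rpath,/opt/libtorch/lib \
//       -o check_libtorch
//
// Neither building nor running this requires a Python installation.
#include <iostream>
#include <torch/torch.h>

int main() {
  // Exercise libtorch.so directly; nothing Python-related is touched.
  torch::Tensor t = torch::ones({2, 2});
  std::cout << t.sum().item<float>() << std::endl;  // prints 4
  return 0;
}
```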


Is that the intended way of doing this, then (e.g. just pip install and then rip out the torch/ package directory for use elsewhere)? I noticed that the libtorch.so instances in the precompiled LibTorch 2.5.1+CPU and PyTorch 2.5.1+CPU (for example) are not identical, so I didn’t know whether this could cause weird issues later.

(Edit, tangential but for posterity: the pip installation ships libraries built with the pre-cxx11 ABI. It seems like building from source is the only way around this for now. More context in Status of pip wheels with _GLIBCXX_USE_CXX11_ABI=1 · Issue #51039 · pytorch/pytorch · GitHub.)
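You can confirm which flavor a given Python build uses with torch.compiled_with_cxx11_abi(). On the C++ side, a small probe like the hypothetical one below shows what your own translation units are compiled with, so a mismatch against the libtorch binaries is easy to spot (the g++ line is an assumption for a libstdc++ toolchain):

```cpp
// Print the libstdc++ dual-ABI setting this translation unit was built with.
// To match the pre-cxx11 pip-wheel binaries, compile the application with:
//
//   g++ -D_GLIBCXX_USE_CXX11_ABI=0 abi_probe.cpp -o abi_probe
//
#include <cstdio>

int main() {
#if defined(_GLIBCXX_USE_CXX11_ABI)
  // 1 = new (cxx11) ABI, 0 = old (pre-cxx11) ABI.
  std::printf("_GLIBCXX_USE_CXX11_ABI=%d\n", _GLIBCXX_USE_CXX11_ABI);
#else
  std::printf("Not building against libstdc++'s dual-ABI headers.\n");
#endif
  return 0;
}
```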

I am also very confused by this, especially since the tutorial explicitly says to link against the libtorch library, which I assumed meant the standalone C++ LibTorch distribution.


Does that mean we can use the full range of the C++ API without any need for the standalone LibTorch C++ distribution?