Using both the libtorch C++ and Python APIs


I’ve written a wrapper around a PyTorch operator to use it in ONNX Runtime (ORT), because the corresponding ORT operator is significantly slower than the PyTorch one (see [Performance] Pytorch is faster than ONNX when running inference multiple times · Issue #14596 · microsoft/onnxruntime · GitHub). The wrapper uses the libtorch C++ API: it simply wraps the ORT tensors in torch tensors with at::from_blob. I load this custom ORT kernel in Python from a .so/.dll file. Before doing so, I also load the necessary libtorch shared libraries with ctypes.cdll.LoadLibrary (alternatively I could set the library search path environment variable, but I found this approach easier).
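The loading step looks roughly like the sketch below. The library directory and the exact list of .so files are placeholders (a CPU-only Linux build is assumed); the important part is loading dependencies first and using RTLD_GLOBAL so the custom-op library can resolve torch symbols:

```python
import ctypes
import os

# Hypothetical path to the extracted libtorch distribution (assumption).
LIBTORCH_LIB_DIR = "/opt/libtorch/lib"

# Load order matters: dependencies first. This list assumes a CPU-only
# Linux libtorch build; a CUDA build ships additional libraries.
_TORCH_LIBS = ["libc10.so", "libtorch_cpu.so", "libtorch.so"]

def load_torch_libs(lib_dir=LIBTORCH_LIB_DIR, libs=_TORCH_LIBS):
    """Load the libtorch shared libraries with RTLD_GLOBAL so that a
    custom-op .so loaded afterwards resolves torch symbols against them."""
    handles = []
    for name in libs:
        path = os.path.join(lib_dir, name)
        if not os.path.exists(path):
            raise FileNotFoundError(path)
        # RTLD_GLOBAL exposes the symbols to subsequently loaded libraries.
        handles.append(ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL))
    return handles
```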

This works fine; however, after loading the libtorch DLLs I can no longer import torch in Python: doing so simply results in a segfault. If I instead import torch first and then load the libtorch DLLs, I get the following error when loading my custom ORT operator:

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Failed to load library /path/to/ with error: /path/to/ undefined symbol: _ZN3c106detail14torchCheckFailEPKcS2_jRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

I believe the cause is a version (or ABI) mismatch between the Python install and libtorch: the undefined symbol above demangles to c10::detail::torchCheckFail, a core libtorch symbol, which suggests the custom op was built against a different libtorch build than the one that gets loaded. I install the Python version with pip3 install torch==2.0.0+cu118 -f … and download libtorch from … I thought this should be fine since they are the same version, but maybe they come from different builds?
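One way to check whether the two installs really come from the same build is to compare the version strings both distributions ship. This sketch assumes the usual layouts: the pip wheel records its version in torch/version.py, and the libtorch zip ships a build-version file at its root:

```python
import os
import re

def wheel_torch_version(torch_pkg_dir):
    """Read __version__ from the pip wheel's torch/version.py."""
    with open(os.path.join(torch_pkg_dir, "version.py")) as f:
        match = re.search(r"__version__\s*=\s*['\"]([^'\"]+)['\"]", f.read())
    return match.group(1) if match else None

def libtorch_build_version(libtorch_root):
    """Read the build-version file at the root of the libtorch archive."""
    with open(os.path.join(libtorch_root, "build-version")) as f:
        return f.read().strip()

def builds_match(torch_pkg_dir, libtorch_root):
    """True if both distributions report the same version string
    (e.g. '2.0.0+cu118'). Matching strings do not strictly guarantee
    identical builds, but a mismatch confirms the problem."""
    return wheel_torch_version(torch_pkg_dir) == libtorch_build_version(libtorch_root)
```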

So I was wondering how best to deal with this. Would it be possible to link my C++ wrapper against the DLLs in /usr/local/lib/python/dist-packages/torch/lib? They seem to be the same as the ones from libtorch, or will this cause issues? My goal is to ship a version of the code with only the libtorch libraries needed for inference, to keep the install size small, while still having the full PyTorch Python API available for development and training.
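If linking against the wheel’s bundled libraries turns out to be viable, their location can be found from Python without even importing torch. A small sketch (it returns None when torch is not installed):

```python
import importlib.util
import os

def torch_lib_dir():
    """Return the lib/ directory bundled inside the pip-installed torch
    package, or None if torch is not installed. Building the C++ wrapper
    against these exact libraries avoids mixing two different builds."""
    spec = importlib.util.find_spec("torch")
    if spec is None or not spec.submodule_search_locations:
        return None
    return os.path.join(list(spec.submodule_search_locations)[0], "lib")
```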