Using both the libtorch C++ and Python APIs

Hey,

I’ve written a wrapper around a PyTorch operator to use it in ORT, because one ORT operator is significantly slower than its PyTorch counterpart (see [Performance] Pytorch is faster than ONNX when running inference multiple times · Issue #14596 · microsoft/onnxruntime · GitHub). For this I use the libtorch C++ API (I simply wrap the ORT tensors in a torch tensor with at::from_blob). I load this custom ORT kernel in Python from a .so/.dll file. Before doing so, I also load the necessary libtorch DLLs with ctypes.cdll.LoadLibrary (alternatively I could set the DLL environment variable, but I found this approach easier).
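
For reference, the preloading step described above can be sketched in Python like this. This is a minimal sketch: the library names and load order below are assumptions about a typical CPU libtorch layout, and the directory is whatever your libtorch unpack location is — adjust both to your install.

```python
import ctypes
import os

# Assumed libtorch shared objects, listed in dependency order
# (c10 before torch_cpu before torch); adjust to your distribution.
TORCH_LIBS = ["libc10.so", "libtorch_cpu.so", "libtorch.so"]


def torch_lib_paths(lib_dir, libs=TORCH_LIBS):
    """Return absolute paths of the libtorch shared objects, in load order."""
    return [os.path.join(lib_dir, name) for name in libs]


def load_torch_libs(lib_dir):
    """Load the libtorch shared objects globally before registering the
    custom ORT kernel, so the kernel's torch symbols resolve."""
    return [ctypes.CDLL(p, mode=ctypes.RTLD_GLOBAL)
            for p in torch_lib_paths(lib_dir)]
```

RTLD_GLOBAL matters here: without it, symbols from the preloaded libraries may not be visible to the custom-operator library loaded afterwards.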

This works fine; however, after loading the libtorch DLLs I can no longer import torch in Python. Doing so with import torch simply results in a segfault. When I instead import torch first and then load the libtorch DLLs, I get the following error when loading my custom ORT operator:

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Failed to load library /path/to/libORT.so with error: /path/to/libORT.so: undefined symbol: _ZN3c106detail14torchCheckFailEPKcS2_jRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

I believe the reason is a version mismatch between the Python install and libtorch. I install the Python version with pip3 install torch==2.0.0+cu118 -f https://download.pytorch.org/whl/torch_stable.html and download libtorch from https://download.pytorch.org/libtorch/cu118/libtorch-cxx11-abi-shared-with-deps-2.0.0%2Bcu118.zip. I thought this should be fine since they are the same version, but maybe they come from different builds?

So I’m wondering how best to deal with this. Would it be possible to use the DLLs in /usr/local/lib/python/dist_packages/torch/lib for my C++ wrapper? They seem to be the same as the ones from libtorch, or will this cause issues? My goal is to ship a build that contains only the libtorch libraries needed for inference, to keep the install size small, while keeping the full PyTorch Python API available for development and training.

Hi,

I’ve got the same issue; I’m wondering if you have found a solution?
Any suggestions and help are appreciated :slight_smile:

Hey there,
I solved this problem myself.

Actually, there is no need to download the separate C++ libtorch distribution if you have already installed PyTorch with pip. Assuming you use conda, libtorch.so is automatically installed in “/root/miniconda3/envs/your_env/lib/python3.10/site-packages/torch/lib”, alongside many other libraries that C++ code can use. Linking your C++ code and CMake against this libtorch directly causes no version conflict when the module is imported into Python via pybind11, because both come from the same place. And pybind11 itself has no version issue, so any installation method is fine (cloning the source with git, or pip).
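
To point the C++ build at exactly the libraries the Python interpreter imports, you can locate the pip-installed package’s lib/ directory programmatically. A sketch, assuming the standard site-packages layout (`.../torch/lib`); using find_spec avoids actually importing torch:

```python
import importlib.util
import os


def bundled_lib_dir(package="torch"):
    """Return the lib/ directory bundled with an installed package
    (e.g. site-packages/torch/lib), without importing the package."""
    spec = importlib.util.find_spec(package)
    if spec is None or not spec.submodule_search_locations:
        raise RuntimeError(f"{package} is not installed in this environment")
    pkg_dir = list(spec.submodule_search_locations)[0]
    return os.path.join(pkg_dir, "lib")
```

With PyTorch installed, torch.utils.cmake_prefix_path also gives the matching CMake config directory, so find_package(Torch) in CMake picks up the same pip-installed binaries.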

Hope this helps anyone with the same issue. :slight_smile: