Hello, I’m testing my shared library (.so file) inside another commercial program.
The problem is that the commercial program shuts down when it loads a torch::jit NN model.
The program gives me an unbelievable out-of-memory error, e.g. a failed allocation of 14032033177600 bytes (~14032 GB).
To be clear, my shared library successfully loads the NN model and runs the entire process when it is called from a simple C++ executable.
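For reference, the loading path inside my library is essentially the standard torch::jit::load flow; the sketch below is a simplified stand-alone version of what works for me (the "model.pt" path and the 1x3x224x224 dummy input are placeholders, not my real values):

```cpp
// Simplified stand-alone version of the loading/inference path that
// works when it is driven from a plain C++ executable.
// The model path and input shape are placeholders.
#include <torch/script.h>

#include <iostream>
#include <vector>

int main() {
    try {
        // Deserialize the TorchScript module (placeholder path).
        torch::jit::script::Module module = torch::jit::load("model.pt");
        module.eval();

        // Run one forward pass with a dummy input (placeholder shape).
        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(torch::ones({1, 3, 224, 224}));
        at::Tensor output = module.forward(inputs).toTensor();
        std::cout << "output sizes: " << output.sizes() << std::endl;
    } catch (const c10::Error& e) {
        std::cerr << "error while loading/running the model: " << e.what() << std::endl;
        return -1;
    }
    return 0;
}
```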
Of course, I asked the commercial program’s engineer about this, and the answer was that the commercial program can load a PyTorch model only when my library is compiled against the cxx11 ABI build of shared libtorch. Sadly, my server’s ABI versions are too old to reproduce the model loading stage.
This is my server’s ABI information:
- GLIBCXX: 3.4 ~ 3.4.19
- CXXABI: 1.3 ~ 1.3.7
- GLIBC: 2.14, 2.2.5, 2.3, 2.3.2, 2.4
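As far as I understand, the “ABI” difference here is the libstdc++ dual ABI: the regular libtorch download is built with _GLIBCXX_USE_CXX11_ABI=0, while the cxx11 ABI download expects 1 (my understanding, please correct me if wrong). A tiny check like this sketch shows which one a translation unit is compiled with:

```cpp
// Prints which libstdc++ dual ABI this translation unit is compiled with.
// (My assumption: pre-cxx11 libtorch builds expect _GLIBCXX_USE_CXX11_ABI=0,
// the cxx11 ABI builds expect 1.)
#include <iostream>

int main() {
#if defined(_GLIBCXX_USE_CXX11_ABI)
    std::cout << "_GLIBCXX_USE_CXX11_ABI = " << _GLIBCXX_USE_CXX11_ABI << std::endl;
#else
    std::cout << "_GLIBCXX_USE_CXX11_ABI is not defined (non-GNU libstdc++?)" << std::endl;
#endif
    return 0;
}
```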
To sum up: I compiled my shared library against torch-1.8.1 and it runs well from the simple executable program, but it fails in the commercial program. I also tried to compile my shared library against libtorch_1.8.1_ABI, but that build gives a library dependency not found error (GLIBC_2.18 not found).
Does anyone have an idea what the main cause of this model loading problem could be?