I am trying LibTorch on CPU. I tested two builds of the official distribution.
Version 1: cxx11 ABI
cmake -DCMAKE_PREFIX_PATH=/tools/libtorch-cxx11-abi-shared-with-deps-1.13.0+cpu ..
cmake --build . --config Release
Version 2: pre-cxx11 ABI
cmake -DCMAKE_PREFIX_PATH=/tools/libtorch-shared-with-deps-1.13.0+cpu ..
cmake --build . --config Release
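For completeness, my CMakeLists.txt is just the standard minimal LibTorch setup (a rough sketch from memory; the source file name test-model.cpp is assumed here):

cmake_minimum_required(VERSION 3.10 FATAL_ERROR)
project(test-model)

find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

add_executable(test-model test-model.cpp)
target_link_libraries(test-model "${TORCH_LIBRARIES}")
set_property(TARGET test-model PROPERTY CXX_STANDARD 14)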
Both versions build successfully, but when I run the release executable I get the same error:
./test-model: symbol lookup error: ./test-model: undefined symbol: _ZN2at4_ops4ones4callEN3c108ArrayRefINS2_6SymIntEEENS2_8optionalINS2_10ScalarTypeEEENS6_INS2_6LayoutEEENS6_INS2_6DeviceEEENS6_IbEE
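For reference, that mangled name demangles (e.g. with c++filt) to at::_ops::ones::call(c10::ArrayRef<c10::SymInt>, c10::optional<c10::ScalarType>, c10::optional<c10::Layout>, c10::optional<c10::Device>, c10::optional<bool>), i.e. the plain ones operator. These are just generic commands to reproduce the demangling and to check which libtorch.so the binary resolves at run time:

echo '_ZN2at4_ops4ones4callEN3c108ArrayRefINS2_6SymIntEEENS2_8optionalINS2_10ScalarTypeEEENS6_INS2_6LayoutEEENS6_INS2_6DeviceEEENS6_IbEE' | c++filt
ldd ./test-model | grep torch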
However, when I switch CMAKE_PREFIX_PATH to the PyTorch install path, for example /opt/conda/lib/python3.8/site-packages/torch/share/cmake, the release executable runs successfully.
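In other words, this works (the conda path is just where PyTorch happens to be installed on my machine):

cmake -DCMAKE_PREFIX_PATH=/opt/conda/lib/python3.8/site-packages/torch/share/cmake ..
cmake --build . --config Release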
So it seems the LibTorch package downloaded from the official site cannot be used directly in C++? Why is that?
My test C++ code is basically the official TorchScript loading example:
#include <torch/script.h>
#include <torch/torch.h>
#include <iostream>
#include <memory>
#include <numeric>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: test-model <path-to-exported-script-module>\n";
    return -1;
  }
  // Load the exported TorchScript module.
  torch::jit::script::Module module = torch::jit::load(argv[1]);
  // Build a dummy input with torch::ones (the op named in the undefined-symbol error).
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));  // placeholder shape
  torch::Tensor output = module.forward(inputs).toTensor();
  std::cout << output.sizes() << std::endl;
  return 0;
}
For the example above, what should I do if I want to use the official LibTorch libraries? Maybe I downloaded the wrong build for my CPU? By the way, my OS is Ubuntu 20.04.4 LTS (Focal Fossa) and the CPU is an AMD EPYC 7713.