Hello,
I was following the basic tutorial for loading an exported model in TorchScript (Loading a TorchScript Model in C++ — PyTorch Tutorials 1.8.1+cu102 documentation),
but after I run either

```shell
cmake -DCMAKE_PREFIX_PATH="$(python -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')" ..
```

or

```shell
cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
```

followed by `make`, and then try to run the executable, I get this error:

```
./aanet: symbol lookup error: ./aanet: undefined symbol: _ZN5torch3jit4loadERKSsN3c108optionalINS3_6DeviceEEERSt13unordered_mapISsSsSt4hashISsESt8equal_toISsESaISt4pairIS1_SsEEE
```
(I’m on Ubuntu 20.04, x86_64, and have tried with both PyTorch 1.7 and 1.8.)
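In case it helps, the undefined symbol can be demangled with `c++filt` (standard binutils) to see which function fails to resolve; this is just the diagnostic I ran, not a fix:

```shell
# Demangle the undefined symbol from the error message
sym='_ZN5torch3jit4loadERKSsN3c108optionalINS3_6DeviceEEERSt13unordered_mapISsSsSt4hashISsESt8equal_toISsESaISt4pairIS1_SsEEE'
echo "$sym" | c++filt   # prints a torch::jit::load(...) overload
```

If I read the mangling right, the `Ss` abbreviations are the pre-cxx11 `std::string`, which makes me suspect a mismatch between the ABI my binary was compiled with and the ABI of the `libtorch.so` found at runtime; `ldd ./aanet | grep torch` should show which library the loader actually resolves (e.g. whether a conda-installed PyTorch is shadowing the standalone libtorch), but I'm not sure how to act on that.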
I’ve tried some suggestions I found in previous threads, such as:
- creating a brand-new conda environment and installing PyTorch alone
- adding `set(CMAKE_CXX_FLAGS "-D_GLIBCXX_USE_CXX11_ABI=0")` to CMakeLists.txt
- using the pre-cxx11 ABI build of libtorch
These are my files:

CMakeLists.txt

```cmake
cmake_minimum_required(VERSION 3.0)
project(aanet)

# add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0)
set(CMAKE_CXX_FLAGS "-D_GLIBCXX_USE_CXX11_ABI=0")

find_package(Torch REQUIRED)

add_executable(${PROJECT_NAME} main.cpp)
target_link_libraries(${PROJECT_NAME} "${TORCH_LIBRARIES}")
set_property(TARGET ${PROJECT_NAME} PROPERTY CXX_STANDARD 14)
```
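One thing I noticed while re-reading: my `set(CMAKE_CXX_FLAGS ...)` line replaces whatever flags were already set instead of adding to them. Appending would look like this (just a sketch of a variant I could try; the 0/1 value itself may still be wrong for my libtorch build):

```cmake
# Append the ABI define instead of overwriting existing flags;
# the 0/1 value has to match the ABI of the libtorch being linked.
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_GLIBCXX_USE_CXX11_ABI=0")
```

If I understand correctly, `python -c 'import torch; print(torch.compiled_with_cxx11_abi())'` reports which ABI the conda-installed PyTorch was built with, so the define should match that when linking against it.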
main.cpp

```cpp
#include <torch/script.h> // One-stop header.

#include <iostream>
#include <memory>

int main(int argc, const char *argv[])
{
    if (argc != 2)
    {
        std::cerr << "usage: example-app <path-to-exported-script-module>\n";
        return -1;
    }

    torch::jit::script::Module module;
    try
    {
        // Deserialize the ScriptModule from a file using torch::jit::load().
        module = torch::jit::load(argv[1]);
    }
    catch (const c10::Error &e)
    {
        std::cerr << "error loading the model\n";
        return -1;
    }

    std::cout << "ok\n";
}
```
I’m completely lost as to how to debug this problem (short of a full system reset), since this same example compiles and runs without issue on my other devices.
Any help would be greatly appreciated. Thanks!