Libtorch code integration failed with error "version `GOMP_4.5' not found"

Hi,
We are currently deploying a Faster-RCNN model trained with Detectron2 in C++ code.
Our code uses the following libraries:

find_package(Torch REQUIRED PATHS /home/pps/libtorch-cxx11-abi-shared-with-deps-1.8.1+cu111/libtorch)
find_package(TorchVision REQUIRED)
find_package(realsense2 REQUIRED)
find_package(OpenCV REQUIRED)
find_package(PCL REQUIRED)
find_package(Boost COMPONENTS program_options)

Unfortunately, although the .cpp code compiles without any problem, I get the following error message at execution:

./ADVP: /home/pps/libtorch-cxx11-abi-shared-with-deps-1.8.1+cu111/libtorch/lib/libgomp-75eea7e8.so.1: version `GOMP_4.5' not found (required by /usr/local/lib/libpcl_features.so.1.11)
./ADVP: /home/pps/libtorch-cxx11-abi-shared-with-deps-1.8.1+cu111/libtorch/lib/libgomp-75eea7e8.so.1: version `GOMP_4.5' not found (required by /usr/local/lib/libpcl_common.so.1.11)
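For reference, one way to narrow this down is to check which `libgomp` the dynamic linker actually resolves for the binary and for PCL, and which `GOMP_*` symbol versions each candidate library exports. This is a diagnostic sketch; the paths come from my setup, and the system library path (`/usr/lib/x86_64-linux-gnu/libgomp.so.1`) is the usual location on Ubuntu 20.04 x86_64 but should be verified on your machine:

```shell
# Which libgomp does each component resolve to at load time?
ldd ./ADVP | grep -i gomp
ldd /usr/local/lib/libpcl_common.so.1.11 | grep -i gomp

# Which GOMP symbol versions does the libgomp bundled with libtorch export?
strings /home/pps/libtorch-cxx11-abi-shared-with-deps-1.8.1+cu111/libtorch/lib/libgomp-75eea7e8.so.1 | grep '^GOMP_'

# And which ones does the system libgomp export? (GOMP_4.5 should appear here
# on a GCC 9 system if PCL can be satisfied by it.)
strings /usr/lib/x86_64-linux-gnu/libgomp.so.1 | grep '^GOMP_'
```

If the bundled `libgomp-75eea7e8.so.1` lacks `GOMP_4.5` while the system copy has it, the error comes from the libtorch copy being loaded first and shadowing the system one.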

Libtorch seems to interfere with the PCL library.
Here is the CMakeLists.txt I use to build my program:

cmake_minimum_required(VERSION 3.12 FATAL_ERROR)

# project name
project(ADVP)

# look for all package needed
find_package(Torch REQUIRED PATHS /home/pps/libtorch-cxx11-abi-shared-with-deps-1.8.1+cu111/libtorch)
find_package(TorchVision REQUIRED)
find_package(realsense2 REQUIRED)
find_package(OpenCV REQUIRED)
find_package(PCL REQUIRED)
find_package(Boost COMPONENTS program_options)

# RUNNING AND LEARNING CODE
add_executable(ADVP src/ADVP.cpp include/RS-CV_helper_functions.hpp include/RS-PCL_helper_functions.hpp include/CV_helper_functions.hpp include/CV-PCL_helper_functions.hpp)

# target libs
target_link_libraries(ADVP ${TORCH_LIBRARIES})
target_link_libraries(ADVP ${DEPENDENCIES} ${realsense2_LIBRARY})
target_link_libraries(ADVP ${DEPENDENCIES} ${OpenCV_LIBS})
target_link_libraries(ADVP ${DEPENDENCIES} ${PCL_LIBRARIES})

# target property
set_property(TARGET ADVP PROPERTY CXX_STANDARD 14)

I want to point out that my code works without the Torch library, but as soon as I integrate it I get this error.

Here is my environment configuration :

PyTorch version: 1.8.1+cu111
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.16.3

Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 11.1.105
GPU models and configuration: GPU 0: GeForce GTX 1080
Nvidia driver version: 460.56
cuDNN version: Probably one of the following:
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.1.1
/usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.1.1
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==1.8.1+cu111
[pip3] torchaudio==0.8.1
[pip3] torchvision==0.9.1+cu111
[conda] Could not collect

I use the current pre-compiled version of libtorch for CUDA 11.1 (cxx11 ABI).

It’s been a week, and I have tried everything to make it work.
Does anyone have an idea?
Do I need to build Libtorch myself?

Best regards,

Quentin.

You might be hitting this older issue, which points towards an import problem with OpenMP.
However, based on the discussion it should have been solved. In any case, a source build should not suffer from this issue, as your local OpenMP library would be used.
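If you want to avoid a full source build first, a commonly reported workaround for this class of error is to make the loader use the system `libgomp` instead of the copy bundled with libtorch. This is a sketch, not an official fix; it assumes the system library lives at `/usr/lib/x86_64-linux-gnu/libgomp.so.1` (typical for Ubuntu 20.04 x86_64) and exports `GOMP_4.5`:

```shell
# Option 1: prefer the system libgomp at run time, without touching libtorch.
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libgomp.so.1 ./ADVP

# Option 2: replace the copy bundled with libtorch by a symlink to the system
# library. Keep a backup so the change is reversible.
cd /home/pps/libtorch-cxx11-abi-shared-with-deps-1.8.1+cu111/libtorch/lib
mv libgomp-75eea7e8.so.1 libgomp-75eea7e8.so.1.bak
ln -s /usr/lib/x86_64-linux-gnu/libgomp.so.1 libgomp-75eea7e8.so.1
```

Option 1 is the safer test, since it only affects a single run; if it works, it confirms the bundled `libgomp` is the culprit before you modify anything on disk.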

Hi, thanks for your answer.
I already came across this issue on GitHub.
I will try to build the library myself ^^.

Best regards,

Quentin.

Hi!
I’m having the exact same issue. Was this resolved? It’s also not clear to me from the other thread what I should do in this case.