Building PyTorch for Conda from source with GLIBCXX_USE_CXX11_ABI=1

Before I begin: I have checked the topics similar to this one, and they all deal with linking issues or reference PRs that are now outdated. I am attempting to compile the libtorch binaries and build PyTorch from source with GLIBCXX_USE_CXX11_ABI=1.

The reason for this is that I am trying to build Open3D with the ML bundle functionality, including both the PyTorch ops and the TensorFlow ops. As for the status of Open3D's CXX11 ABI support: it does support compilation with this setting, but there are still open issues surrounding it, for example: https://github.com/isl-org/Open3D/pull/6288

I have been using the conda builder to build with GLIBCXX_USE_CXX11_ABI=1 set.

During the build I can see that the CXX flag is successfully set:

-- 
-- ******** Summary ********
--   CMake version                     : 3.26.4
--   CMake command                     : $BUILD_PREFIX/bin/cmake
--   System                            : Linux
--   C++ compiler                      : /opt/rh/devtoolset-9/root/usr/bin/c++
--   C++ compiler version              : 9.3.1
--   CXX flags                         :  -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -Wnon-virtual-dtor
--   Build type                        : Release
--   Compile definitions               : ONNX_ML=1;ONNXIFI_ENABLE_EXT=1;__STDC_FORMAT_MACROS
--   CMAKE_PREFIX_PATH                 : $PREFIX/lib/python3.10/site-packages;$PREFIX;/usr/local/cuda-12.1
--   CMAKE_INSTALL_PREFIX              : $SRC_DIR/torch
--   CMAKE_MODULE_PATH                 : $SRC_DIR/cmake/Modules;$SRC_DIR/cmake/public/../Modules_CUDA_fix
-- 
--   ONNX version                      : 1.15.0rc2
--   ONNX NAMESPACE                    : onnx_torch
--   ONNX_USE_LITE_PROTO               : OFF
--   USE_PROTOBUF_SHARED_LIBS          : OFF
--   Protobuf_USE_STATIC_LIBS          : ON
--   ONNX_DISABLE_EXCEPTIONS           : OFF
--   ONNX_DISABLE_STATIC_REGISTRATION  : OFF
--   ONNX_WERROR                       : OFF
--   ONNX_BUILD_TESTS                  : OFF
--   ONNX_BUILD_BENCHMARKS             : OFF
--   ONNX_BUILD_SHARED_LIBS            : 
--   BUILD_SHARED_LIBS                 : OFF
-- 
--   Protobuf compiler                 : 
--   Protobuf includes                 : 
--   Protobuf libraries                : 
--   BUILD_ONNX_PYTHON                 : OFF
-- 

The build completes successfully, and I am able to install the resulting conda package into a blank py310 environment with the required dependencies. The problem is that when I check the CXX11 ABI setting, it reports False:

❯ python
Python 3.10.13 | packaged by conda-forge | (main, Oct 26 2023, 18:07:37) [GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch._C._GLIBCXX_USE_CXX11_ABI)
False
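
In case it is useful, the full compile-time configuration can also be dumped from the installed package (just a sketch of the check; torch.__config__.show() prints the CXX flags the binary was actually compiled with):

# Dump the build configuration of the installed torch, including the CXX flags
# it was compiled with (-D_GLIBCXX_USE_CXX11_ABI=... should appear there):
python -c "import torch; print(torch.__config__.show())"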

Does anybody happen to know what I am not understanding?

I know you probably did, but I have to ask: Did you double-check that the PyTorch you’re running is the one you actually built?
When I build my PyTorch with setup.py bdist_wheel and then install the wheel, I get C++11-ABI packages, and I cannot remember that ever being different.
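
For example, something along these lines (just a sketch) would show which installation the import actually resolves to and which build it is:

# Print where the imported torch module lives, plus its build metadata, to rule
# out accidentally importing a different install:
python -c "import torch; print(torch.__file__); print(torch.__version__, torch.version.git_version)"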

Best regards

Thomas

I have created a clean conda environment using conda create -n test python=3.10.

I then install the pytorch package from my local build output directory, along with the necessary channels for its dependencies. Here is the output of this command:

 conda install -c file:///home/gmacmillan/projects/tmp/out_py3.10_cuda12.1_cudnn8.9.2_1_20231127 pytorch -c pytorch -c numba/label/dev -c pytorch-nightly -c nvidia
Channels:
 - file:///home/gmacmillan/projects/tmp/out_py3.10_cuda12.1_cudnn8.9.2_1_20231127
 - pytorch
 - numba/label/dev
 - pytorch-nightly
 - nvidia
 - conda-forge
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: /home/gmacmillan/mambaforge/envs/build-open3d

  added / updated specs:
    - pytorch


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    pytorch-1.13.1             |py3.10_cuda12.1_cudnn8.9.2_1        1.33 GB  file:///home/gmacmillan/projects/tmp/out_py3.10_cuda12.1_cudnn8.9.2_1_20231127
    torchtriton-2.1.0+6e4932cda8|            py310       121.3 MB  pytorch-nightly
    ------------------------------------------------------------
                                           Total:        1.45 GB

The following NEW packages will be INSTALLED:

  blas               conda-forge/linux-64::blas-2.116-mkl 
  blas-devel         conda-forge/linux-64::blas-devel-3.9.0-16_linux64_mkl 
  cuda-cudart        nvidia/linux-64::cuda-cudart-12.1.105-0 
  cuda-cupti         nvidia/linux-64::cuda-cupti-12.1.105-0 
  cuda-libraries     nvidia/linux-64::cuda-libraries-12.1.0-0 
  cuda-nvrtc         nvidia/linux-64::cuda-nvrtc-12.1.105-0 
  cuda-nvtx          nvidia/linux-64::cuda-nvtx-12.1.105-0 
  cuda-opencl        nvidia/linux-64::cuda-opencl-12.3.101-0 
  cuda-runtime       nvidia/linux-64::cuda-runtime-12.1.0-0 
  filelock           conda-forge/noarch::filelock-3.13.1-pyhd8ed1ab_0 
  gmp                conda-forge/linux-64::gmp-6.3.0-h59595ed_0 
  gmpy2              conda-forge/linux-64::gmpy2-2.1.2-py310h3ec546c_1 
  icu                conda-forge/linux-64::icu-73.2-h59595ed_0 
  jinja2             conda-forge/noarch::jinja2-3.1.2-pyhd8ed1ab_1 
  libblas            conda-forge/linux-64::libblas-3.9.0-16_linux64_mkl 
  libcblas           conda-forge/linux-64::libcblas-3.9.0-16_linux64_mkl 
  libcublas          nvidia/linux-64::libcublas-12.1.0.26-0 
  libcufft           nvidia/linux-64::libcufft-11.0.2.4-0 
  libcufile          nvidia/linux-64::libcufile-1.8.1.2-0 
  libcurand          nvidia/linux-64::libcurand-10.3.4.101-0 
  libcusolver        nvidia/linux-64::libcusolver-11.4.4.55-0 
  libcusparse        nvidia/linux-64::libcusparse-12.0.2.55-0 
  libgfortran-ng     conda-forge/linux-64::libgfortran-ng-13.2.0-h69a702a_3 
  libgfortran5       conda-forge/linux-64::libgfortran5-13.2.0-ha4646dd_3 
  libhwloc           conda-forge/linux-64::libhwloc-2.9.3-default_h554bfaf_1009 
  libiconv           conda-forge/linux-64::libiconv-1.17-h166bdaf_0 
  liblapack          conda-forge/linux-64::liblapack-3.9.0-16_linux64_mkl 
  liblapacke         conda-forge/linux-64::liblapacke-3.9.0-16_linux64_mkl 
  libnpp             nvidia/linux-64::libnpp-12.0.2.50-0 
  libnvjitlink       nvidia/linux-64::libnvjitlink-12.1.105-0 
  libnvjpeg          nvidia/linux-64::libnvjpeg-12.1.1.14-0 
  libstdcxx-ng       conda-forge/linux-64::libstdcxx-ng-13.2.0-h7e041cc_3 
  libxml2            conda-forge/linux-64::libxml2-2.11.6-h232c23b_0 
  llvm-openmp        conda-forge/linux-64::llvm-openmp-15.0.7-h0cdce71_0 
  markupsafe         conda-forge/linux-64::markupsafe-2.1.3-py310h2372a71_1 
  mkl                conda-forge/linux-64::mkl-2022.1.0-h84fe81f_915 
  mkl-devel          conda-forge/linux-64::mkl-devel-2022.1.0-ha770c72_916 
  mkl-include        conda-forge/linux-64::mkl-include-2022.1.0-h84fe81f_915 
  mpc                conda-forge/linux-64::mpc-1.3.1-hfe3b2da_0 
  mpfr               conda-forge/linux-64::mpfr-4.2.1-h9458935_0 
  mpmath             conda-forge/noarch::mpmath-1.3.0-pyhd8ed1ab_0 
  networkx           conda-forge/noarch::networkx-3.2.1-pyhd8ed1ab_0 
  python_abi         conda-forge/linux-64::python_abi-3.10-4_cp310 
  pytorch            out_py3.10_cuda12.1_cudnn8.9.2_1_20231127/linux-64::pytorch-1.13.1-py3.10_cuda12.1_cudnn8.9.2_1 
  pytorch-cuda       pytorch/linux-64::pytorch-cuda-12.1-ha16c6d3_5 
  pytorch-mutex      pytorch/noarch::pytorch-mutex-1.0-cuda 
  pyyaml             conda-forge/linux-64::pyyaml-6.0.1-py310h2372a71_1 
  sympy              conda-forge/noarch::sympy-1.12-pypyh9d50eac_103 
  tbb                conda-forge/linux-64::tbb-2021.10.0-h00ab1b0_2 
  torchtriton        pytorch-nightly/linux-64::torchtriton-2.1.0+6e4932cda8-py310 
  typing_extensions  conda-forge/noarch::typing_extensions-4.8.0-pyha770c72_0 
  yaml               conda-forge/linux-64::yaml-0.2.5-h7f98852_2 

The following packages will be DOWNGRADED:

  _openmp_mutex                                   4.5-2_gnu --> 4.5-2_kmp_llvm 


Proceed ([y]/n)? y


Downloading and Extracting Packages:

Preparing transaction: done
Verifying transaction: done
Executing transaction: done
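
As a sanity check that the package which actually got installed is the one from my local file:// channel (and not one silently pulled from the remote pytorch channels), the Channel column of conda list can be inspected:

# The Channel column for pytorch should point at the local build output directory:
conda list pytorch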

Finally, I execute the ABI test command:

❯ python -c "import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)"
False
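
A more direct check on the installed binary itself is to look for std::__cxx11 symbols in libtorch (a sketch; the paths assume a python 3.10 conda environment and the usual torch/lib layout inside site-packages):

# Locate the installed torch package:
python -c "import torch, os; print(os.path.dirname(torch.__file__))"

# Symbols in the std::__cxx11:: namespace only appear when the library was
# compiled with -D_GLIBCXX_USE_CXX11_ABI=1; a count of 0 indicates the old ABI:
nm -D $CONDA_PREFIX/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so | grep -c '__cxx11'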