Building llama-cpp-python with CUDA support fails due to GLIBC version incompatibility

Hello,

Issue Summary

Building llama-cpp-python with CUDA support (GGML_CUDA=on) fails at the final linking stage with undefined GLIBC symbol references coming from /usr/local/cuda/lib64/libcublasLt.so.12.

Environment Details

  • OS: Ubuntu Linux
  • GPU: NVIDIA (Driver 570.158.01, CUDA 12.8)
  • Python Environment: Miniconda3 with Python 3.13
  • PyTorch: 2.8.0.dev20250622+cu128
  • Compiler: conda’s x86_64-conda-linux-gnu-c++ (GCC 11.2.0)

Error Output

  /home/oba/miniconda3/bin/../lib/gcc/x86_64-conda-linux-gnu/11.2.0/../../../../x86_64-conda-linux-gnu/bin/ld: /usr/local/cuda/lib64/libcublasLt.so.12: undefined reference to `log2f@GLIBC_2.27'
  /home/oba/miniconda3/bin/../lib/gcc/x86_64-conda-linux-gnu/11.2.0/../../../../x86_64-conda-linux-gnu/bin/ld: /usr/local/cuda/lib64/libcublasLt.so.12: undefined reference to `__cxa_thread_atexit_impl@GLIBC_2.18'
  collect2: error: ld returned 1 exit status

Root Cause

The CUDA library libcublasLt.so.12 requires GLIBC symbols:

  • log2f@GLIBC_2.27 (from GLIBC 2.27+)
  • __cxa_thread_atexit_impl@GLIBC_2.18 (from GLIBC 2.18+)

However, the GLIBC that the linker resolves these symbols against apparently does not provide them. My suspicion is that this is not Ubuntu's system glibc but the older glibc in the sysroot bundled with conda's x86_64-conda-linux-gnu toolchain, since that is the ld being invoked (see the path in the error output above).
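
To help narrow this down, here are a few diagnostic commands I can run and post the output of. The libc path below is the usual location on Ubuntu x86_64, and the last command assumes conda's binutils are on PATH; both are assumptions about my setup rather than anything specific to llama-cpp-python:

  # Version of the system glibc
  ldd --version | head -n1

  # Does the system libc export the two symbols from the link error?
  objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep -E 'log2f|__cxa_thread_atexit_impl'

  # Which GLIBC versions does libcublasLt.so.12 actually require?
  objdump -p /usr/local/cuda/lib64/libcublasLt.so.12 | grep -A20 'Version References'

  # Which search directories / sysroot does conda's linker use?
  x86_64-conda-linux-gnu-ld --verbose | grep SEARCH_DIR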

Build Command Used

  CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native" pip install -e . --verbose
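
For reference, these commands show which nvcc and which compilers the build resolves to (assuming conda's compiler package and the CUDA toolkit are both on PATH):

  which nvcc && nvcc --version | tail -n1
  which x86_64-conda-linux-gnu-c++ && x86_64-conda-linux-gnu-c++ --version | head -n1
  which gcc g++    # system compilers, if installed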

What’s Been Tried

  1. Using the system CUDA toolkit (/usr/local/cuda) instead of conda's CUDA, roughly as sketched after this list - same error
  2. Both conda CUDA libraries and system CUDA libraries show the same GLIBC dependency issue
  3. The error occurs specifically when linking vision tools (llava, mtmd) that depend on CUDA libraries
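
By "using system CUDA" in item 1 I mean pointing the build at /usr/local/cuda rather than conda's toolkit, along these lines (CUDACXX and CUDAToolkit_ROOT are standard CMake hooks, not llama-cpp-python specific; the exact invocation I used may have differed slightly):

  CUDACXX=/usr/local/cuda/bin/nvcc \
  CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native -DCUDAToolkit_ROOT=/usr/local/cuda" \
  pip install -e . --verbose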

Questions for Forum

  1. How to resolve GLIBC version conflicts when building llama-cpp-python with CUDA?
  2. Is there a way to use older/compatible CUDA libraries that don’t require GLIBC 2.27+?
  3. Can the build be configured to skip problematic vision components while keeping core CUDA functionality?
  4. Should I use the system compiler instead of the conda compiler (roughly what I have in mind is sketched below this list), or create a different conda environment?
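
Regarding question 4, this is the kind of invocation I have in mind. It is untested and assumes the system GCC lives at /usr/bin/gcc and /usr/bin/g++ and is a version accepted as a CUDA 12.8 host compiler; CMAKE_C_COMPILER, CMAKE_CXX_COMPILER, and CMAKE_CUDA_HOST_COMPILER are standard CMake variables:

  CC=/usr/bin/gcc CXX=/usr/bin/g++ \
  CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native -DCMAKE_C_COMPILER=/usr/bin/gcc -DCMAKE_CXX_COMPILER=/usr/bin/g++ -DCMAKE_CUDA_HOST_COMPILER=/usr/bin/g++" \
  pip install -e . --verbose

For question 3, llama-cpp-python's CMakeLists.txt appears to expose an LLAVA_BUILD option that could turn off the llava/mtmd tools, but I have not verified that against the current version.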

Additional Context

  • The build progresses successfully until the final linking stage for vision tools
  • Core CUDA libraries (libggml-cuda.so) appear to build successfully
  • Only fails when linking the final executables that use cuBLAS (a quick check of what the built artifacts require is sketched below)
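
My understanding is that undefined references coming from a needed shared library are only fully checked when linking executables, which would explain why libggml-cuda.so links fine while the llava/mtmd binaries do not, though I may be wrong about that. To double-check what the successfully built library requires (the find pattern is a guess at where the editable build puts its output):

  # List the GLIBC versions required by the CUDA backend library that did build
  find . -name 'libggml-cuda.so' -exec objdump -p {} \; | grep -oE 'GLIBC_[0-9.]+' | sort -u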

Hopefully the above covers the details needed to diagnose this GLIBC/CUDA library compatibility issue. Any pointers would be much appreciated.

Regards,