'GLIBCXX_3.4.30' not found

Hi,
I have installed the latest torch version in a fresh environment and get the following error when executing python run_glue.py:

/usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /home/user/.conda/envs/env/lib/python3.8/site-packages/torch/lib/libtorch_python.so)

I have already tried installing:

  • conda install libgcc
  • conda install -c conda-forge libstdcxx-ng=12
  • conda install -c conda-forge gcc=12.1.0

strings ~/.conda/envs/env/lib/libstdc++.so.6 | grep 'GLIBCXX_3'

GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_3.4.14
GLIBCXX_3.4.15
GLIBCXX_3.4.16
GLIBCXX_3.4.17
GLIBCXX_3.4.18
GLIBCXX_3.4.19
GLIBCXX_3.4.20
GLIBCXX_3.4.21
GLIBCXX_3.4.22
GLIBCXX_3.4.23
GLIBCXX_3.4.24
GLIBCXX_3.4.25
GLIBCXX_3.4.26
GLIBCXX_3.4.27
GLIBCXX_3.4.28
GLIBCXX_3.4.29
GLIBCXX_3.4.30
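To compare the versions in a strings dump like the one above programmatically instead of eyeballing the list, a small helper can pick out the highest GLIBCXX version (a sketch, not part of the thread; the versioned symbol naming is the only assumption):

```python
import re

def max_glibcxx(symbols):
    """Return the highest GLIBCXX_x.y.z version among symbol strings,
    or None if no versioned GLIBCXX symbol is present."""
    versions = []
    for s in symbols:
        m = re.fullmatch(r"GLIBCXX_(\d+(?:\.\d+)*)", s)
        if m:
            versions.append(tuple(int(p) for p in m.group(1).split(".")))
    return ".".join(map(str, max(versions))) if versions else None

# Feed it the output of: strings libstdc++.so.6 | grep GLIBCXX
print(max_glibcxx(["GLIBCXX_3.4", "GLIBCXX_3.4.29", "GLIBCXX_3.4.30"]))
# -> 3.4.30
```

If the maximum reported for /usr/lib/x86_64-linux-gnu/libstdc++.so.6 is below 3.4.30 while the conda env's copy has it, the loader is simply picking the older system library over the env's newer one.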

Any ideas how to fix this?

How did you install PyTorch? Using the official install commands or from conda-forge? I’ve seen these GLIBC incompatibilities with conda-forge before, as some of its libs seem to build against the latest standards.

I used this command:
conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia

Could you post a minimal, executable code snippet to reproduce the issue, or is it already failing during the import?
I’ve created a new and clean conda environment and cannot reproduce any issues:

>>> import torch
>>> torch.randn(1).cuda()
tensor([0.2782], device='cuda:0')
>>> torch.__version__
'2.2.0'
>>> torch.version.cuda
'11.8'

installed via:

conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia

Do you mind sharing your conda list output and the environment in which you are executing this command?

I reinstalled using the command above and it automatically selects conda-forge and CPU builds, which I don’t want:

    pytorch-2.1.0            | cpu_generic_py38had2c7df_0       71.6 MB  conda-forge
    torchvision-0.16.1       | cpu_py38h901811f_2                9.7 MB  conda-forge

I get this conda warning when installing any conda package:

Collecting package metadata (current_repodata.json): WARNING conda.models.version:get_matcher(542): Using .* with relational operator is superfluous and deprecated and will be removed in a future version of conda. Your spec was 1.7.1.*, but conda is ignoring the .* and treating it as 1.7.1
done

My conda env might be restricted by the admin.

Then, I changed the install command:

conda install pytorch==2.1.1 torchvision==0.16.1 pytorch-cuda=11.8 -c pytorch -c nvidia

The following packages will be UPDATED:

  pytorch            conda-forge::pytorch-2.1.0-cpu_generi~ --> pytorch::pytorch-2.1.1-py3.8_cuda11.8_cudnn8.7.0_0 None

The following packages will be SUPERSEDED by a higher-priority channel:

  torchvision        conda-forge::torchvision-0.16.1-cpu_p~ --> pytorch::torchvision-0.16.1-py38_cu118 None

The following packages will be DOWNGRADED:

  _openmp_mutex                                   4.5-2_gnu --> 4.5-2_kmp_llvm None
  libblas                         3.9.0-21_linux64_openblas --> 3.9.0-16_linux64_mkl None
  libcblas                        3.9.0-21_linux64_openblas --> 3.9.0-16_linux64_mkl None
  liblapack                       3.9.0-21_linux64_openblas --> 3.9.0-16_linux64_mkl None

Following your instructions:

>>> import torch
>>> torch.randn(1).cuda()
tensor([0.5221], device='cuda:0')
>>> torch.__version__
'2.1.1'
>>> torch.version.cuda
'11.8'

I also started my script, and the GLIBCXX error is gone :slight_smile:

Great! So for some reason the conda-forge packages were indeed installed. :confused:
Note that we do not maintain them but I’ve seen similar compatibility issues in the past when trying to mix packages from conda-forge with other libs.
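To spot which packages in an environment came from conda-forge, the conda list output can be filtered on its Channel column (a sketch assuming the four-column layout shown in this thread; packages from the default channel may lack a channel field and are skipped):

```python
def forge_packages(conda_list_lines):
    """Return names of packages whose Channel column is conda-forge."""
    pkgs = []
    for line in conda_list_lines:
        if line.startswith("#") or not line.strip():
            continue  # skip the header and blank lines
        parts = line.split()
        # Columns: Name, Version, Build, Channel (channel may be absent)
        if len(parts) >= 4 and parts[3] == "conda-forge":
            pkgs.append(parts[0])
    return pkgs

sample = [
    "# Name                    Version                   Build  Channel",
    "pytorch        2.1.0   cpu_generic_py38had2c7df_0  conda-forge",
    "torchvision    0.16.1  py38_cu118                  pytorch",
]
print(forge_packages(sample))
# -> ['pytorch']
```

Running this over the full conda list dump makes it easy to see whether any PyTorch-related package was silently pulled from conda-forge.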


I used

mamba install pytorch torchvision pytorch-cuda=12.2 -c pytorch -c nvidia

which yielded no complaints. Afterward, mamba list torch returned:

# packages in environment at /home/ec2-user/anaconda3/envs/LLM:
#
# Name                    Version                   Build  Channel
pytorch                   2.2.2              py3.12_cpu_0    pytorch
pytorch-cuda              12.2                 h5ef38aa_0    https://aws-ml-conda.s3.us-west-2.amazonaws.com
pytorch-mutex             1.0                         cpu    pytorch
torchaudio                2.2.2                 py312_cpu    pytorch
torchvision               0.17.2                py312_cpu    pytorch

But here is the weird thing:

python -c 'import torch; print(torch.version.cuda); print(torch.__version__); print(torch.cuda.is_available())'

returns

None
2.2.2
False

The environment variables CONDA_HOME, PATH, and LD_LIBRARY_PATH all point to /usr/local/cuda-12.2 or its lib64 subdirectory, as does the symlink /usr/local/cuda. Furthermore, I was able to compile a trivial CUDA hello-world program, suggesting that the CUDA installation itself is good.
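Given build strings like py3.12_cpu_0 in the listing above, a quick heuristic can flag a CPU-only build without even importing torch (a sketch; the 'cpu' build-tag convention used by the pytorch channel is the only assumption):

```python
def is_cpu_build(build_string):
    """Heuristic: CPU-only conda builds of PyTorch carry a 'cpu' tag
    in the underscore-separated build string, while CUDA builds carry
    a 'cudaXX.Y' or 'cuXXX' tag instead."""
    return "cpu" in build_string.split("_")

print(is_cpu_build("py3.12_cpu_0"))                 # True  (CPU-only build)
print(is_cpu_build("py3.8_cuda11.8_cudnn8.7.0_0"))  # False (CUDA build)
```

A True result here explains torch.version.cuda being None and torch.cuda.is_available() returning False, regardless of how well the system CUDA toolkit is set up.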

Any suggestions?

You’ve installed the CPU-only binary and would need to install the one with CUDA support.

Thank you for catching this @ptrblck! In the end, the usual installs failed to handle CUDA 12.2 and fell back to the CPU builds. After downgrading to CUDA 12.1, the GPU versions did install, and all is well.

Good to hear it’s working!
Yes, pytorch-cuda=12.2 is not built by us and we recommend sticking to the official install instructions.


I also ran into this problem yesterday on main; it was solved by installing gcc 12.3 via conda. I’m on CUDA 12.4 and WSL.

Similar to the OP, I also had GLIBCXX_3.4.30 in a libstdc++ on my system path, but it didn’t help.