PyTorch cannot find libcudnn

Hello,
I am trying to use PyTorch with Poetry on Fedora Linux. I have the CUDA drivers installed on my system, but I do not want to install cuDNN or NCCL system-wide.
From what I understand, PyTorch should work out of the box on a system with CUDA drivers, whether or not cuDNN and NCCL are installed system-wide, since it ships them as dependencies.
When I try to run code that uses the PyTorch library and requires the NVIDIA libraries, I get this (or a similar) error:

/home/mble/.cache/pypoetry/virtualenvs/remove-background-H96ZRE0z-py3.12/lib64/python3.12/site-packages/torch/nn/modules/conv.py:456: UserWarning: Attempt to open cnn_infer failed: handle=0 error: libcudnn_cnn_infer.so.8: cannot open shared object file: No such file or directory (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:81.)
  return F.conv2d(input, weight, bias, self.stride,

This happens even though the required libraries are present in the virtual environment:

$ find /home/mble/.cache/pypoetry/virtualenvs/remove-background-H96ZRE0z-py3.12/ -type f -name libcudnn\*
/home/mble/.cache/pypoetry/virtualenvs/remove-background-H96ZRE0z-py3.12/lib/python3.12/site-packages/nvidia/cudnn/lib/libcudnn.so.8
/home/mble/.cache/pypoetry/virtualenvs/remove-background-H96ZRE0z-py3.12/lib/python3.12/site-packages/nvidia/cudnn/lib/libcudnn_adv_infer.so.8
/home/mble/.cache/pypoetry/virtualenvs/remove-background-H96ZRE0z-py3.12/lib/python3.12/site-packages/nvidia/cudnn/lib/libcudnn_adv_train.so.8
/home/mble/.cache/pypoetry/virtualenvs/remove-background-H96ZRE0z-py3.12/lib/python3.12/site-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8
/home/mble/.cache/pypoetry/virtualenvs/remove-background-H96ZRE0z-py3.12/lib/python3.12/site-packages/nvidia/cudnn/lib/libcudnn_cnn_train.so.8
/home/mble/.cache/pypoetry/virtualenvs/remove-background-H96ZRE0z-py3.12/lib/python3.12/site-packages/nvidia/cudnn/lib/libcudnn_ops_infer.so.8
/home/mble/.cache/pypoetry/virtualenvs/remove-background-H96ZRE0z-py3.12/lib/python3.12/site-packages/nvidia/cudnn/lib/libcudnn_ops_train.so.8
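For reference, here is one way to check whether the dynamic loader can resolve the library named in the warning, by trying to load it directly (a minimal sketch, run inside the same virtual environment; the library name is copied from the warning above):

import ctypes

import torch  # imported first, since importing torch normally sets up its bundled NVIDIA libraries

try:
    # dlopen of the same soname the warning complains about
    ctypes.CDLL("libcudnn_cnn_infer.so.8")
    print("libcudnn_cnn_infer.so.8 resolved")
except OSError as exc:
    print("loader cannot resolve it:", exc)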

In this example, PyTorch is version 2.3.1+cu121:

Python 3.12.3 (main, Apr 17 2024, 00:00:00) [GCC 14.0.1 20240411 (Red Hat 14.0.1-0)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print('torch version', torch.__version__)
torch version 2.3.1+cu121

Best regards,
Maciej Błędkowski

The issue seems to be specific to Poetry and is also described here.

Almost, but no locally installed CUDA toolkit is needed either; only a properly installed NVIDIA driver is necessary. However, I don't know exactly how Poetry works or why it's unable to use the shipped dependencies.
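If it helps, a quick sanity check that the driver is visible and that the cuDNN shipped with the wheel can be loaded could look like this (a small sketch using standard torch APIs; the commented values are only examples):

import torch

print(torch.__version__)                    # e.g. 2.3.1+cu121
print(torch.cuda.is_available())            # True if the NVIDIA driver is installed and working
print(torch.backends.cudnn.is_available())  # True if the bundled cuDNN can be found and loaded
print(torch.backends.cudnn.version())       # cuDNN version number, e.g. 8902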

Thank you very much for your response.
Do you know what could cause this behavior?

No, unfortunately not, as I'm not familiar with Poetry.
As a workaround, you could use another virtual environment and install PyTorch via pip or conda.
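For example, something along these lines (a sketch, assuming Python 3.12 and the CUDA 12.1 wheels; the index URL is the one published on pytorch.org for cu121 builds):

python3.12 -m venv ~/venvs/torch-cu121
source ~/venvs/torch-cu121/bin/activate
pip install torch --index-url https://download.pytorch.org/whl/cu121
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

If that environment works, the problem is most likely in how Poetry lays out or resolves the nvidia-* wheel dependencies rather than in PyTorch itself.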