This solution worked for me on a fresh installation of Ubuntu 22.04.2 LTS in WSL. Thank you!
Having a similar issue: I’m trying to get Faster Whisper to run from a Docker build.
I’m trying to use the docker image:
pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime
Unfortunately, I’m getting this libcudnn_ops_infer.so.8 issue as well. Does anyone know how I might add the necessary additional libraries? It seems I can’t use the official NVIDIA Docker image (it was too large for my smaller system to handle).
Try removing the system-wide cuDNN installation from your LD_LIBRARY_PATH, which should allow PyTorch to load its own bundled copy.
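To illustrate what that means in practice, here is a minimal sketch that strips cuDNN entries from a path string. The example path value is hypothetical; in a real session you would read `os.environ["LD_LIBRARY_PATH"]` instead.

```python
import os

# Hypothetical example value; in practice read os.environ["LD_LIBRARY_PATH"].
ld_path = "/usr/local/cuda/lib64:/usr/lib/x86_64-linux-gnu/cudnn:/opt/conda/lib"

# Drop any entry that mentions cudnn so PyTorch falls back to its own copy.
cleaned = ":".join(d for d in ld_path.split(":") if "cudnn" not in d)
print(cleaned)  # /usr/local/cuda/lib64:/opt/conda/lib
```

You would then export the cleaned value before launching Python, e.g. `export LD_LIBRARY_PATH="$cleaned"` in your shell or an ENV line in the Dockerfile.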
I’m building this in a Docker container and then running it on a VM in Google Cloud, so I don’t believe cuDNN is already installed on the system.
What exactly are you trying to build in a runtime container?
I’m using this runtime container as the base FROM image and building a docker container to run my python script on a GCP VM. It’s working for running regular Whisper, but not Faster Whisper.
I updated my Dockerfile like so (I got the LD_LIBRARY_PATH value by running this on my VM: python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'):
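For anyone adapting that one-liner, the same lookup can be written as a small helper that skips any NVIDIA wheel that isn’t installed instead of raising ImportError. The function name is mine, not from any library:

```python
import importlib
import os

def nvidia_lib_dirs(modules=("nvidia.cublas.lib", "nvidia.cudnn.lib")):
    """Collect the lib directories of pip-installed NVIDIA wheels,
    joined with ":" for use as an LD_LIBRARY_PATH fragment.
    Wheels that are not installed are silently skipped."""
    dirs = []
    for name in modules:
        try:
            mod = importlib.import_module(name)
            dirs.append(os.path.dirname(mod.__file__))
        except ImportError:
            pass
    return ":".join(dirs)
```

Printing `nvidia_lib_dirs()` inside the container gives you the exact value to put in the ENV line, without crashing on machines that lack one of the wheels.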
FROM pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime
RUN pip install nvidia-cublas-cu11 nvidia-cudnn-cu11
RUN pip install -r requirements.txt
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/conda/lib/python3.10/site-packages/nvidia/cublas/lib:/opt/conda/lib/python3.10/site-packages/nvidia/cudnn/lib
Got a bit further, but got this error:
Could not load library libcudnn_cnn_infer.so.8. Error: /opt/conda/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8: undefined symbol: _ZN11nvrtcHelper4loadEb, version libcudnn_ops_infer.so.8
Seems like it’s close; it’s just missing one compatibility piece, likely a version mismatch between the pip-installed cuDNN and the one in the base image.
Thanks, this worked for me as well.
Thank you very much. I had the same problem and that indeed fixed it.