USE_CUDNN set to zero after upgrade to libtorch 2.0

Hey, I just upgraded to libtorch 2.0, but it reports that USE_CUDNN = 0 and that it will compile without cuDNN support. I did not change anything in my codebase, including the CMakeLists.txt. I am wondering if anyone else is experiencing this?

Could you describe your workflow a bit more?
Are you trying to build libtorch or an application based on libtorch?
Would export USE_CUDNN=1 work?

I downloaded libtorch directly from the PyTorch front page.
libtorch is then integrated into my C++ codebase via CMake.
CUDA can be found, but the configure stage of the build reports USE_CUDNN=0, and
exporting USE_CUDNN=1 does not help.
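For context, a minimal sketch of this kind of CMake integration (project and target names are hypothetical placeholders, not taken from the thread) looks roughly like:

```cmake
# Minimal CMakeLists.txt sketch for an application linking against a
# downloaded libtorch distribution. Point CMAKE_PREFIX_PATH at the
# unpacked libtorch directory when configuring, e.g.:
#   cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch -S . -B build
cmake_minimum_required(VERSION 3.18)
project(my_app CXX)

# The "USE_CUDNN is set to 0" message is printed while this runs,
# during libtorch's own CMake configuration logic.
find_package(Torch REQUIRED)

add_executable(my_app main.cpp)
target_link_libraries(my_app "${TORCH_LIBRARIES}")
set_property(TARGET my_app PROPERTY CXX_STANDARD 17)
```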

Same error message after upgrading to libtorch 2.0:
USE_CUDNN is set to 0. Compiling without cuDNN support
although it does not seem to change anything in practice, since CUDA is available and libtorch does run on the GPU.
In my case I still have CUDA 11.5 and cuDNN 8.4, so that may be the cause of the error message.

I encountered the same issue after upgrading to PyTorch 2.0.

Setting CAFFE2_USE_CUDNN=1 solved it for me.
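In case it helps, here is one way to apply that workaround when configuring the build (the build directory and libtorch path below are hypothetical placeholders); it assumes, as the tip above suggests, that libtorch 2.0's cuDNN check now keys off CAFFE2_USE_CUDNN rather than USE_CUDNN:

```shell
# Set CAFFE2_USE_CUDNN in the environment and/or pass it to CMake,
# then re-run the configure step from a clean build directory.
export CAFFE2_USE_CUDNN=1
cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch \
      -DCAFFE2_USE_CUDNN=ON \
      -S . -B build
cmake --build build
```

If the variable was already cached from a previous configure run, deleting the build directory (or at least CMakeCache.txt) before re-running CMake avoids picking up the stale value.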

The relevant CMake code where PyTorch searches for cuDNN is located here:



Welcome to the community. Thanks for the tip. It works!!!