Where is CuDNN installed?

I installed PyTorch along with CUDA toolkit and (presumably) CuDNN. The install appears to work well: torch.backends.cudnn.version() returns something reasonable, and CUDA tensors are well-behaved.

Now I’m trying to install some other DL packages, and I’d like to set my LD_LIBRARY_PATH so that those packages can use the same CuDNN as PyTorch. I’m trying to avoid installing conflicting copies of CuDNN and breaking things. However, I’ve found that PyTorch’s CUDA install (at /usr/local/cuda-10.0) doesn’t actually include CuDNN in its lib64 directory – or anywhere else, it appears. Calling find /usr/local/cuda-10.0 -name "*cudnn*" returns nothing at all.

So I’m wondering, where does PyTorch install CuDNN?
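As an aside on the checks above: torch.backends.cudnn.version() reports the version as a single integer such as 7605. For cuDNN 7.x/8.x that integer is encoded as major*1000 + minor*100 + patchlevel, so a tiny helper (illustrative, not part of PyTorch) can decode it:

```python
def decode_cudnn_version(v: int) -> tuple:
    """Split a cuDNN version integer, e.g. 7605 -> (7, 6, 5).

    Assumes the major*1000 + minor*100 + patch encoding used by
    cuDNN 7.x/8.x (cuDNN 9 switched to a different scheme).
    """
    major, rest = divmod(v, 1000)
    minor, patch = divmod(rest, 100)
    return major, minor, patch

print(decode_cudnn_version(7605))  # → (7, 6, 5)
```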

Does PyTorch install CUDA?

CuDNN is another NVIDIA library, and I'd say you should install it yourself. It's typically copied into the CUDA folder, but if you want a system with several CUDA/cuDNN pairs, you can keep each pair in its own directory.

Anyway, I'm not an expert, so consider this a weak insight.

The PyTorch binaries will not install a complete CUDA toolkit or cudnn library on your system.
I’m not familiar with the build process of PyTorch, but e.g. if you’ve used conda to install it, you’ll find libtorch.so in ~/anaconda3/envs/YOUR_ENV_NAME/lib/python3.7/site-packages/torch/lib.
Running ldd on it gives:

    libcudart.so.10.0 => ...
    ...
    libcusparse.so.10.0 => ...
    libcurand.so.10.0 => ...
    libcufft.so.10.0 => ...
I’m not sure if cudnn is statically linked into these libraries or shipped separately.
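For what it's worth, one way to check whether cudnn ships inside the environment (rather than under /usr/local/cuda, which would explain the empty find result in the question) is to search the install prefix directly. A minimal sketch; the env path in the comment is hypothetical:

```python
from pathlib import Path

def find_cudnn(root: str) -> list:
    """Return all libcudnn* files under root, e.g. a conda env prefix.

    Conda-based PyTorch installs typically keep cuDNN inside the env,
    so /usr/local/cuda-* can legitimately contain no cudnn files.
    """
    return sorted(str(p) for p in Path(root).rglob("libcudnn*"))

# e.g. find_cudnn("/home/me/anaconda3/envs/YOUR_ENV_NAME")
```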

That being said, if you would like to build other libraries with CUDA (using nvcc), you should install CUDA (and cudnn) directly on your system as @JuanFMontesinos said.
If possible, try to use the same CUDA version that your PyTorch install uses, as this might avoid some incompatibility issues.
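To compare the two versions, you can check torch.version.cuda against what the toolkit itself reports. CUDA toolkits up to 10.x ship a version.txt file at the install root with a line like "CUDA Version 10.0.130" (newer toolkits use version.json instead), so a small parser covers the case discussed here:

```python
import re

def toolkit_version(version_txt: str) -> str:
    """Extract the major.minor version from the contents of the
    version.txt shipped with CUDA <= 10.x, e.g. 'CUDA Version 10.0.130'
    (sketch; newer toolkits ship version.json instead)."""
    m = re.search(r"CUDA Version (\d+\.\d+)", version_txt)
    return m.group(1) if m else ""

print(toolkit_version("CUDA Version 10.0.130"))  # → 10.0
# On a real system, compare against torch.version.cuda, e.g. '10.0'.
```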