How to use system CUDA/cuDNN

I have a machine with a CUDA/cuDNN-compatible ROCm device:

$ nvcc --version
HIPHSA: Author SUGON
HIP version: 5.4.23453
Cuda compilation tools, release 11.8, V11.8.89
clang version 15.0.0 (http://10.15.3.7/dcutoolkit/driverruntime/llvm-project.git 1be90618e508074abc746ab4963d7ad92710d6c5)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /public/software/compiler/dtk-23.10.1/llvm/bin

I then installed PyTorch 2.2 with CUDA 11.8 by running:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

But when I import torch, it can't find any CUDA device:

$ python
Python 3.11.8 (main, Feb 26 2024, 21:39:34) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.device_count()
0
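
For reference, here is the small diagnostic I ran to see what runtime this build was compiled against (a minimal sketch; the values in the comments assume the cu118 wheel installed above and may differ on this machine):

import torch

# which runtime was this torch build compiled against?
print(torch.__version__)          # e.g. 2.2.x+cu118 for the CUDA 11.8 wheel (assumed)
print(torch.version.cuda)         # CUDA version baked into the binary, or None
print(torch.version.hip)          # HIP version for a ROCm/HIP build, or None
print(torch.cuda.is_available())  # False here, consistent with device_count() == 0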

I think the problem is that PyTorch uses its own bundled CUDA/cuDNN runtime libraries, but I want it to use the system CUDA.

I have set CUDA_HOME and LD_LIBRARY_PATH, but it doesn't seem to work.
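
In case it matters, this is how I checked that those variables are actually visible inside the Python process (the path in the comment is only what I set, based on the DTK install shown above, and may not be the right location):

import os

# confirm the variables reach the interpreter that imports torch
print(os.environ.get("CUDA_HOME"))        # e.g. the dtk-23.10.1 prefix (assumed path)
print(os.environ.get("LD_LIBRARY_PATH"))  # should contain the toolkit's library directory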

Which device are you using?
The PyTorch binaries ship with their own CUDA runtime; your locally installed CUDA toolkit will only be used if you build PyTorch from source.
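
As a quick check (a sketch, assuming a recent PyTorch), you can compare the runtime the installed binary was built with against the local toolkit that a source or extension build would pick up:

import torch
from torch.utils import cpp_extension

print(torch.version.cuda)        # CUDA runtime the installed binary was compiled against
print(cpp_extension.CUDA_HOME)   # locally detected CUDA toolkit (used when building from source or building extensions)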