torch::cuda::is_available() always returns false

Good morning,

I’m developing a C++ application using LibTorch 2.7.1+cu128. I cannot move the model to the GPU because torch::cuda::is_available() always returns false.
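
For reference, this is roughly how I check for CUDA and try to move the model; the model filename and the overall structure are simplified placeholders, not my real code:

```cpp
#include <torch/torch.h>
#include <torch/script.h>
#include <iostream>

int main() {
    // Both of these report "no CUDA" on my machine, even with the CUDA build of LibTorch.
    std::cout << std::boolalpha
              << "CUDA available: " << torch::cuda::is_available() << std::endl;
    std::cout << "CUDA device count: " << torch::cuda::device_count() << std::endl;

    // "model.pt" is a placeholder for my TorchScript model.
    torch::jit::Module model = torch::jit::load("model.pt");

    if (torch::cuda::is_available()) {
        model.to(torch::kCUDA);  // never reached, is_available() is always false
    }
    return 0;
}
```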

The DLLs I’m using are the following:

asmjit.dll
cublasLt64_12.dll
cublas64_12.dll
cudart64_12.dll
cudnn64_9.dll
cufft64_11.dll
cupti64_2025.1.0.dll
curand64_10.dll
cusolver64_11.dll
cusparse64_12.dll
c10.dll
c10_cuda.dll
fbgemm.dll
libiomp5md.dll
nvrtc64_120_0.dll
torch.dll
torch_cpu.dll
torch_cuda.dll

and the .lib files are the following:

asmjit.lib"
fbgemm.lib"
c10.lib"
c10_cuda.lib"
torch.lib"
torch_cpu.lib"
torch_cuda.lib"

I’m taking these files from the official Debug and Release folders I downloaded from the official website. I’ve written a CMake file named findTorch.cmake where I load the .lib files and copy the DLLs to the binary distribution folder; a simplified sketch of it is below.
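
Roughly, findTorch.cmake does something like this (LIBTORCH_DIR, my_app, and the file layout are placeholder names, not my exact script):

```cmake
# Simplified sketch of my findTorch.cmake setup; names are placeholders.
cmake_minimum_required(VERSION 3.18)
project(my_app LANGUAGES CXX)

set(LIBTORCH_DIR "C:/libtorch" CACHE PATH "Extracted LibTorch folder")

add_executable(my_app main.cpp)

target_include_directories(my_app PRIVATE
    "${LIBTORCH_DIR}/include"
    "${LIBTORCH_DIR}/include/torch/csrc/api/include")

# Link the .lib files listed above.
foreach(lib asmjit fbgemm c10 c10_cuda torch torch_cpu torch_cuda)
    target_link_libraries(my_app PRIVATE "${LIBTORCH_DIR}/lib/${lib}.lib")
endforeach()

# Copy the DLLs next to the executable after the build.
file(GLOB TORCH_DLLS "${LIBTORCH_DIR}/lib/*.dll")
add_custom_command(TARGET my_app POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_if_different ${TORCH_DLLS} $<TARGET_FILE_DIR:my_app>)
```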

Can someone help me to solve this?

Thanks! Kind regards!

Hello, I encounter the same problem when I use LibTorch 1.9.1 in Visual Studio 2019 (Windows).
I have made sure that:
(1) I have added all the relevant include and lib directories to the project.
(2) The version of LibTorch is 1.9.1.
(3) The CUDA environment is OK and matches the build, which is cu111.
(4) CUDA is available from PyTorch, but not from LibTorch.
(5) The environment variables for CUDA and LibTorch have been added to the system PATH.
When I check the output window of Visual Studio 2019, I see that torch_cpu.dll and c10.dll are loaded, but not torch_cuda.dll. So I guess that is the reason.
But I cannot solve the problem.
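
For what it’s worth, a minimal Windows-only check like the sketch below (not taken from my real project) could at least show whether torch_cuda.dll can be loaded by itself, or whether one of its dependencies is missing:

```cpp
#include <windows.h>
#include <iostream>

int main() {
    // Try to load torch_cuda.dll explicitly; it must be on PATH or next to the exe.
    HMODULE cudaDll = LoadLibraryA("torch_cuda.dll");
    if (cudaDll == nullptr) {
        // A failure here usually points to a missing dependency (e.g. a CUDA DLL),
        // not to the linker never referencing the library.
        std::cout << "torch_cuda.dll could not be loaded, error "
                  << GetLastError() << std::endl;
    } else {
        std::cout << "torch_cuda.dll loads fine on its own" << std::endl;
        FreeLibrary(cudaDll);
    }
    return 0;
}
```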