Unable to link CUDA

Hi, I’m going through the PyTorch C++ Frontend tutorial.

Here is my code to check whether CUDA is available:

#include <torch/torch.h>
#include <iostream>

int main() {
  torch::DeviceType device_type;
  if (torch::cuda::is_available()) {
    std::cout << "CUDA available! Training on GPU" << std::endl;
    device_type = torch::kCUDA;
  } else {
    std::cout << "Training on CPU" << std::endl;
    device_type = torch::kCPU;
  }
  torch::Device device(device_type);

  torch::Tensor tensor = torch::rand({2, 3}).to(device);
  std::cout << tensor << std::endl;
  return 0;
}

My CMakeLists.txt:

cmake_minimum_required(VERSION 3.0 FATAL_ERROR)

find_package(Torch REQUIRED)
find_package(CUDA REQUIRED)

add_executable(gpudebug gpudebug.cpp)
target_link_libraries(gpudebug ${TORCH_LIBRARIES} ${CUDA_LIBRARIES})
set_property(TARGET gpudebug PROPERTY CXX_STANDARD 11)
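For reference, I configure and build roughly like this (the LibTorch path is a placeholder for wherever I unpacked the zip):

```shell
# Point CMake at the unpacked LibTorch distribution and build.
# /path/to/libtorch is a placeholder; adjust to your layout.
mkdir build && cd build
cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
make
```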

CMake found CUDA and LibTorch, and make succeeded, but ‘ldd gpudebug’ doesn’t show any CUDA libraries.

The output of running the code:

Training on CPU
0.6203 0.3150 0.9453
0.3561 0.1440 0.0994
[ Variable[CPUFloatType]{2,3} ]

It seems that ‘torch::cuda::is_available()’ returns false.
Any ideas how to fix this?

I think I found the fix.

The LibTorch link provided by the tutorial is

wget https://download.pytorch.org/libtorch/nightly/cpu/libtorch-shared-with-deps-latest.zip

You should change it to

wget https://download.pytorch.org/libtorch/nightly/cu90/libtorch-shared-with-deps-latest.zip
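After downloading the CUDA build, I re-ran the build from a clean directory and checked the linked libraries again. Something like the following (the unpack path is a placeholder, and the exact library names in the ldd output may vary by version):

```shell
# Unpack the CUDA-enabled LibTorch, rebuild against it, and verify
# that CUDA libraries now appear in the dynamic dependencies.
unzip libtorch-shared-with-deps-latest.zip -d /path/to/libtorch-cuda
rm -rf build && mkdir build && cd build
cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch-cuda/libtorch ..
make
ldd gpudebug | grep -i cuda
```

With the CPU-only zip, that grep comes back empty, which matches what I was seeing; with the cu90 zip it should list CUDA runtime libraries.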