Hi,

I am trying to run the torchvision C++ API with LibTorch. I have added the following to my CMakeLists.txt:
```cmake
find_package(TorchVision REQUIRED)
target_link_libraries(my-target PUBLIC TorchVision::TorchVision)
```
But when I call cmake I get:
```
ubuntu@pc:~/pytorch_cpp/build$ cmake -DCMAKE_PREFIX_PATH="/home/ubuntu/pytorch_cpp/build/libtorch;/home/ubuntu/pytorch_cpp/build/vision" ..
-- Caffe2: CUDA detected: 10.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 10.0
-- Found cuDNN: v7.5.1 (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so)
-- Autodetected CUDA architecture(s): 5.2 5.2
-- Added CUDA NVCC flags for: -gencode;arch=compute_52,code=sm_52
CMake Error at build/vision/build/TorchVisionConfig.cmake:50 (include):
  include could not find load file:

    /home/ubuntu/pytorch_cpp/build/vision/build/TorchVisionTargets.cmake
Call Stack (most recent call first):
  CMakeLists.txt:10 (find_package)

CMake Error at build/vision/build/TorchVisionConfig.cmake:59 (set_target_properties):
  set_target_properties Can not find target to add properties to:
  TorchVision::TorchVision
Call Stack (most recent call first):
  CMakeLists.txt:10 (find_package)

-- Configuring incomplete, errors occurred!
See also "/home/ubuntu/pytorch_cpp/build/CMakeFiles/CMakeOutput.log".
See also "/home/ubuntu/pytorch_cpp/build/CMakeFiles/CMakeError.log".
```
I checked /home/ubuntu/pytorch_cpp/build/vision/build/ for TorchVisionTargets.cmake and it's not present. I ran cmake in that directory, but the file was not created. I'm not sure how to get around this issue, as I'm pretty new to LibTorch. I'm trying to run the RCNN-FPN model from detectron2 on TorchScript. If I run it without vision I get the error:
```
Unknown builtin op: torchvision::nms. Could not find any similar ops to torchvision::nms. This op may not exist or may not be currently supported in TorchScript.
```
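From what I understand, TorchVisionTargets.cmake is only generated when the vision library itself is configured, built, and installed, so I suspect I first need something like the following (the paths are guesses based on my own layout, and I'm not sure sudo make install is the right final step):

```
# Build and install the torchvision C++ library against the same libtorch
cd /home/ubuntu/pytorch_cpp/build/vision
mkdir -p build && cd build
cmake -DCMAKE_PREFIX_PATH=/home/ubuntu/pytorch_cpp/build/libtorch ..
make -j"$(nproc)"
sudo make install   # should install TorchVisionConfig.cmake and TorchVisionTargets.cmake
```

Is this the intended workflow, or is there a way to consume vision directly from its source tree?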
I need some guidance on how to set up vision in C++. Here are my files:
CMakeLists.txt
```cmake
cmake_minimum_required(VERSION 3.10 FATAL_ERROR)
project(example)

#set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD 14)

find_package(Torch REQUIRED)

add_executable(main main.cpp)
target_link_libraries(main ${TORCH_LIBRARIES})
```
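For reference, this is roughly what I assume the combined CMakeLists.txt should look like once TorchVision is found and linked (the find_package(TorchVision) line and the TorchVision::TorchVision target come from the vision README; the rest is my existing file, so please correct me if the PUBLIC linkage is wrong here):

```
cmake_minimum_required(VERSION 3.10 FATAL_ERROR)
project(example)

set(CMAKE_CXX_STANDARD 14)

find_package(Torch REQUIRED)
find_package(TorchVision REQUIRED)

add_executable(main main.cpp)
# Linking against TorchVision should register custom ops like torchvision::nms
target_link_libraries(main PUBLIC ${TORCH_LIBRARIES} TorchVision::TorchVision)
```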
main.cpp
```cpp
#include <iostream>

#include <torch/torch.h>
#include <torch/script.h>
#include <torchvision/vision.h>

int main()
{
    torch::Device device(torch::kCPU);

    torch::Tensor tensor = torch::zeros({2, 2});
    std::cout << tensor << std::endl;

    if (torch::cuda::is_available()) {
        std::cout << "CUDA is available!" << std::endl;
        device = torch::kCUDA;
    }

    torch::Tensor test_gpu_tensor = tensor.to(device);
    std::cout << test_gpu_tensor << std::endl;

    torch::jit::script::Module module;
    module = torch::jit::load("/home/ubuntu/pytorch_cpp/script_model.pt");
    module.to(device);

    return 0;
}
```
Directory Structure:

```
├── CMakeLists.txt
├── build
│   ├── CMakeCache.txt
│   ├── CMakeFiles [14 entries exceeds filelimit, not opening dir]
│   ├── Makefile
│   ├── cmake_install.cmake
│   ├── detect_cuda_compute_capabilities.cpp
│   ├── detect_cuda_version.cc
│   ├── libtorch
│   │   ├── bin
│   │   ├── build-hash
│   │   ├── build-version
│   │   ├── include
│   │   ├── lib
│   │   └── share
│   ├── main
│   └── vision
│       └── build
├── main.cpp
└── script_model.pt
```