Convert torch::tensor to cv::Mat

I want to convert the resulting torch::Tensor to a cv::Mat using the following code, but the resultImg is wrong. I have checked the tensor itself and its values are correct.

torch::Tensor out_tensor = module->forward(inputs).toTensor();
assert(out_tensor.device().type() == torch::kCUDA);
out_tensor = out_tensor.to(torch::kCPU);
out_tensor = out_tensor.squeeze().detach();
out_tensor = out_tensor.permute({1, 2, 0});
out_tensor = out_tensor.mul(255).clamp(0, 255).to(torch::kInt);
cv::Mat resultImg(512, 512, CV_8UC3, (void*)out_tensor.data_ptr());

Of course, I can get the correct image result like this:

for (int y = 0; y < 512; ++y)
    for (int x = 0; x < 512; ++x)
        for (int c = 0; c < 3; ++c)
        {
            int val = *(out_tensor[y][x][c].data<int>());
            testImg.at<cv::Vec3b>(y, x)[c] = val;
        }
But this way is too slow. I want to know a more efficient method; any advice is appreciated.

Hi @JeeLee

you should be able to use std::memcpy like so:

#include <torch/torch.h>
#include <opencv2/core/core.hpp>

int main()
{
   cv::Mat cv_mat = cv::Mat::eye(3,3,CV_32F);
   torch::Tensor tensor = torch::zeros({3, 3}, torch::kF32);

   std::memcpy(tensor.data_ptr(), cv_mat.data, sizeof(float)*tensor.numel());

   std::cout << cv_mat << std::endl;
   std::cout << tensor << std::endl;

   return 0;
}

and your CMakeLists.txt

cmake_minimum_required(VERSION 3.11 FATAL_ERROR)

project(torch_to_cv)

find_package(Torch REQUIRED)
find_package(OpenCV REQUIRED)

include_directories(${TORCH_INCLUDE_DIRS})
include_directories(${OpenCV_INCLUDE_DIRS})

add_executable(main main.cpp)
target_link_libraries(main ${TORCH_LIBRARIES} ${OpenCV_LIBRARIES})

Of course this also means that you'll lose track of the gradients. And if you are dealing with higher-dimensional tensors, as you already pointed out, you'll need to be aware that OpenCV stores images as HxWxC, while PyTorch stores them as CxHxW. Using permute() for that is perfectly fine.
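As an aside (not from the original thread): the CHW-to-HWC reshuffling that permute({1, 2, 0}) plus a contiguity-restoring copy performs on the underlying memory can be illustrated with plain arrays. This is a sketch using std::vector stand-ins rather than real tensors; the function name chw_to_hwc is made up for the example:

#include <cassert>
#include <cstdint>
#include <vector>

// Copy a CHW-ordered buffer (PyTorch layout, planar channels) into an
// HWC-ordered buffer (OpenCV layout, interleaved channels).
std::vector<uint8_t> chw_to_hwc(const std::vector<uint8_t>& chw,
                                int C, int H, int W)
{
    std::vector<uint8_t> hwc(chw.size());
    for (int c = 0; c < C; ++c)
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                hwc[(y * W + x) * C + c] = chw[(c * H + y) * W + x];
    return hwc;
}

int main()
{
    // 3 channels of a 2x2 image: channel 0 holds 0..3, channel 1 holds
    // 10..13, channel 2 holds 20..23.
    std::vector<uint8_t> chw = {0, 1, 2, 3,  10, 11, 12, 13,  20, 21, 22, 23};
    std::vector<uint8_t> hwc = chw_to_hwc(chw, 3, 2, 2);
    // Pixel (0,0) is now the interleaved triple {0, 10, 20}.
    assert(hwc[0] == 0 && hwc[1] == 10 && hwc[2] == 20);
    return 0;
}

One caveat this illustrates: permute() only changes the tensor's strides, not the memory, so the raw buffer behind data_ptr() is only in HWC order after an operation that materializes a fresh contiguous tensor (e.g. contiguous(), or an arithmetic op that allocates its output).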


Thanks for your reply. Your advice was very useful to me. Now I can get the correct result image using the following code:

torch::Tensor out_tensor = module->forward(inputs).toTensor();
assert(out_tensor.device().type() == torch::kCUDA);
out_tensor = out_tensor.squeeze().detach().permute({1, 2, 0});
out_tensor = out_tensor.mul(255).clamp(0, 255).to(torch::kU8);
// contiguous() makes sure the memory really is in HWC order before the raw copy
out_tensor = out_tensor.to(torch::kCPU).contiguous();
cv::Mat resultImg(512, 512, CV_8UC3);
// sizeof(torch::kU8) is the size of an enum value, not of the element type;
// spell out the element size instead
std::memcpy(resultImg.data, out_tensor.data_ptr(), sizeof(uint8_t) * out_tensor.numel());


Oh, that's helpful.
How do I do inference on the GPU? I mean move the model and the input to the GPU. Can I see the code?
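(Not an answer from the original thread.) A minimal sketch of GPU inference with a TorchScript module, assuming a scripted/traced model saved as "model.pt" (the filename and input shape are placeholders) and a CUDA-enabled build of libtorch; it falls back to CPU when no GPU is present:

#include <torch/script.h>
#include <iostream>
#include <vector>

int main()
{
    // Load the scripted/traced module ("model.pt" is an assumed path).
    torch::jit::script::Module module = torch::jit::load("model.pt");

    // Pick CUDA when available; fall back to CPU otherwise.
    torch::Device device(torch::cuda::is_available() ? torch::kCUDA
                                                     : torch::kCPU);
    module.to(device);   // move the model's parameters to the device
    module.eval();

    // The input must live on the same device as the model.
    torch::Tensor input = torch::rand({1, 3, 512, 512}).to(device);
    std::vector<torch::jit::IValue> inputs{input};

    torch::NoGradGuard no_grad;  // inference only, skip autograd bookkeeping
    torch::Tensor output = module.forward(inputs).toTensor();
    std::cout << output.sizes() << std::endl;
    return 0;
}

Note the thread's snippets use the older shared_ptr API (module->forward); in recent libtorch versions torch::jit::load returns a Module by value, as above.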