Solutions for converting torch::Tensor to cv::Mat still don't work for me

Hi everyone,

I have been following the threads on converting torch::Tensor to cv::Mat, but none of them yields the result I expect.

I have trained a network on batches (of size 16) and now I want to evaluate it on a single image. The network is supposed to output another image, and I believe it has been trained well for that. The image I'm feeding in is a grayscale 256x256:

    cv::Mat img = cv::imread("path/to/img", cv::IMREAD_UNCHANGED);
    // Convert to float and rescale the pixel values to [0, 1].
    cv::normalize(img, img, 0.0, 1.0, cv::NORM_MINMAX, CV_32F);

    // from_blob wraps the cv::Mat's buffer without copying, so img must stay alive.
    torch::Tensor myTensor = torch::from_blob(img.data, {img.rows, img.cols, 1}, torch::kFloat32);
    myTensor = myTensor.permute({2, 0, 1});     // HWC -> CHW
    myTensor = myTensor.view({1, 1, 256, 256}); // add the batch dimension: NCHW
    myTensor = myTensor.to(device);
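
Just for reference, a quick sanity check on the input before the forward pass would look something like this (the shape should be [1, 1, 256, 256] and the values should lie in [0, 1] after cv::normalize):

    // Sanity check on the input tensor (needs <iostream>):
    std::cout << myTensor.sizes()
              << ", min: " << myTensor.min().item<float>()
              << ", max: " << myTensor.max().item<float>() << std::endl;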

Then I feed it to my network, get the output tensor, and try to save the result as a CV_8UC1 image as follows:

    torch::Tensor out_tensor = MyCNN->forward(myTensor);
    out_tensor = out_tensor.permute({0, 2, 3, 1});  // NCHW -> NHWC
    out_tensor = out_tensor.squeeze(0).detach();    // drop the batch dimension
    out_tensor = out_tensor.cpu().mul(255).clamp(0, 255).to(torch::kByte);

    cv::Mat resultImg(256, 256, CV_8UC1);
    std::memcpy(resultImg.data, out_tensor.data_ptr(), sizeof(torch::kByte) * out_tensor.numel());

    cv::imwrite("resultImg.tiff", resultImg);

It does produce and save an image, but it's nowhere close to what I expect; it looks more like white woven fabric!

One thing I should mention: the lines that read the image from disk are exactly the same as in the custom dataloader I wrote for training, except for the following line:

    myTensor = myTensor.view({1, 1, 256, 256});

which obviously is not used during training, since we train with batches.
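
As far as I understand, unsqueeze(0) would do the same job as that view() call for a single image; this is just a sketch of what I mean:

    // After permute, myTensor has shape {1, 256, 256} (C, H, W); unsqueeze(0)
    // adds the batch dimension in front, giving {1, 1, 256, 256} (N, C, H, W),
    // the same shape that view({1, 1, 256, 256}) produces here.
    myTensor = myTensor.unsqueeze(0);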

Any input or help will be greatly appreciated.

Thanks!

out_tensor is not guaranteed to be contiguous. I’d try using ravel() to get a contiguous tensor for the call to memcpy().
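
Something along these lines, as an untested sketch using the out_tensor from your post:

    // ravel() returns a flattened tensor that is guaranteed to be contiguous,
    // so its buffer is laid out in the row-major order cv::Mat expects.
    torch::Tensor flat = out_tensor.ravel();

    cv::Mat resultImg(256, 256, CV_8UC1);
    std::memcpy(resultImg.data, flat.data_ptr<uint8_t>(), flat.numel() * sizeof(uint8_t));
    cv::imwrite("resultImg.tiff", resultImg);

Calling .contiguous() on out_tensor before the memcpy should work as well.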