LibTorch tensor to OpenCV and back (C++)

Perhaps someone can help me with this. I’ve written a small network, and I’d like to pass its output tensor to OpenCV for some further image processing, then pass the result of that back into a tensor which is used in calculating the loss. Converting the tensors to cv::Mats and vice versa is not the problem; however, the weights in the network don’t seem to update, and therefore it doesn’t train. The gist of it is like so:

torch::Tensor prediction = net->forward(img_tensor);

cv::Mat prediction_vectors(cv::Size(512, 512), CV_32FC3, prediction.data_ptr());

// some image processing magic 
cv::Mat someOutput = someImageProcessingMagic;  

torch::Tensor newTensor = torch::from_blob(someOutput.data, { 1, 3, 512, 512 });

torch::Tensor loss = torch::mse_loss(newTensor, target);

Any help is appreciated.

Try creating a new copy by detaching the original tensors, convert them to cv::Mat, do your processing, and then add the results back to the original tensors. See if that works.

Thanks for your reply Kushaj,

I’ve attempted your suggestion but it doesn’t solve the issue, although I’m not sure what you meant by “adding the result to the original tensors”.

torch::Tensor prediction = net->forward(img_tensor);

// create a new instance of the tensor by detaching the original tensor 
torch::Tensor prediction_cpy = prediction.detach();

cv::Mat prediction_vectors(cv::Size(512, 512), CV_32FC3, prediction_cpy.data_ptr());

// some image processing magic 
cv::Mat someOutput = someImageProcessingMagic;  

// do you mean add?
prediction.add(torch::from_blob(someOutput.data, { 1, 3, 512, 512 }));

torch::Tensor loss = torch::mse_loss(prediction, target);

Thanks again for your help.

I meant that, in order to do your OpenCV operations, you can detach the values from the original graph, do all your processing with OpenCV, and at the end add those OpenCV results back to your original graph, if that is what you want (for example, to compute a loss or something else).

I see. Yes, that’s not going to work. The output of the network is a tensor of motion vectors, which I then want to use to warp an image; the warped image becomes a new tensor of the predicted (warped) image, from which I calculate the loss against the desired target image. Hope that makes sense. If not, I’m happy to give you a more detailed use case.

Try this link. If that doesn’t help, I’ll try to come up with something. Also, have you looked at grid_sample in PyTorch?

Thanks - I’ll take a look at it and let you know. Have you got a link to grid_sample as well?

You will have to look at docs for that.

Thanks, I realised after I replied that this was a torch call.

So, I had some time to play around with this, and the code makes perfect sense. However, during debugging I noticed something strange. The values in the “grid” tensor don’t match the values read through the OpenCV Mat created from the tensor’s data pointer. What makes it even more mysterious is that if the grid tensor has either 1 or 3 channels, the values do match. Here is the code:

torch::Tensor _x = torch::arange(10).view({ 1, -1 }).expand({ 10, -1 });
torch::Tensor _y = torch::arange(10).view({ -1, 1 }).expand({ -1, 10 });
torch::Tensor grid = torch::stack({ _x,_y }, 0);
grid = grid.unsqueeze(0).expand({ 1, -1, -1, -1});

cv::Mat gridMat(cv::Size(10, 10), CV_32FC2, grid.data_ptr());

for (int j = 0; j < 10; j++) {
    for (int i = 0; i < 10; i++) {
        // these below should match
        std::cout << gridMat.at<cv::Vec2f>(j, i)[0] << ", " << grid[0][0][j][i] << "\n";
    }
}