This time the cv::Mat values seem normal, but the result is still not right.
What I want is to convert a multi-dimensional tensor of shape [19, 32, 46] into a vector of length 19, where every element of the vector is a cv::Mat of size 32 x 46.
Now the dims are right; the issue is that the values copied into the cv::Mat are still wrong: they are not the same values as in the tensor.
// One cv::Mat per channel of the [19, 32, 46] tensor.
std::vector<cv::Mat> heatMaps(heatMapsTensor.size(0));
for (size_t i = 0; i < heatMaps.size(); i++) {
    torch::Tensor one_heat_map = heatMapsTensor[i];   // slice of shape [32, 46]
    cv::Mat one_mat(heatMapsTensor.size(1), heatMapsTensor.size(2), CV_32FC1);
    // Copy the slice's raw float data into the Mat's buffer.
    std::memcpy(one_mat.data, one_heat_map.data<float>(), sizeof(float) * one_heat_map.numel());
    heatMaps[i] = one_mat;
}
I just saw this usage, tensor_a.data<float>(), which seemed like a clue, so I changed to it, but it is still not right.
I am trying another approach:
get the tensor pointer, say float* p, then copy its data to the cv::Mat. However, how should I get the pointer of a tensor? Using tensor_a.data_ptr()? But that returns a void*, which is not the float* I need.
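For reference, a minimal sketch of the two options I found for getting a typed pointer (assuming tensor_a is a contiguous float32 tensor that already lives on the CPU):

#include <torch/torch.h>

// Example float32 CPU tensor standing in for tensor_a.
torch::Tensor tensor_a = torch::rand({32, 46});

// Option 1: cast the untyped pointer returned by data_ptr().
float* p1 = static_cast<float*>(tensor_a.data_ptr());

// Option 2: the templated overload returns a typed float* directly.
float* p2 = tensor_a.data_ptr<float>();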
It works! I think the step "out_tensor = out_tensor.to(torch::kCPU)" is crucial, am I right??
So we must first move the output tensor from the GPU to the CPU, and only then can we convert the data to a Mat?!
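For completeness, a minimal sketch of the full pattern that ends up working (the helper name and variable names are just illustrative; out_tensor stands for the model output of shape [19, 32, 46]):

#include <torch/torch.h>
#include <opencv2/opencv.hpp>
#include <cstring>
#include <vector>

std::vector<cv::Mat> tensorToMats(torch::Tensor out_tensor) {
    // Bring the data into host memory and make sure it is densely laid out.
    out_tensor = out_tensor.to(torch::kCPU).contiguous();

    std::vector<cv::Mat> heatMaps(out_tensor.size(0));
    for (int64_t i = 0; i < out_tensor.size(0); i++) {
        torch::Tensor slice = out_tensor[i].contiguous();   // e.g. [32, 46]
        cv::Mat m(static_cast<int>(out_tensor.size(1)),
                  static_cast<int>(out_tensor.size(2)), CV_32FC1);
        std::memcpy(m.data, slice.data_ptr<float>(), sizeof(float) * slice.numel());
        heatMaps[i] = m;
    }
    return heatMaps;
}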
Here I got two probability maps from a semantic segmentation task. One is the background class and the other is the object class. Now I want to convert from torch to cv2, and I simply used this one line of code,
Mat seg_map(256, 256, CV_32FC1, probs[1].data_ptr());
However, I found there are some differences from the model inference in Python. Is there a better way to copy tensor data to a Mat?
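I suspect part of the issue is that the Mat constructor with a data pointer does not copy anything: it only wraps the tensor's memory, so things go wrong if that tensor is on the GPU, non-contiguous, or freed later. Here is a sketch of the copying variant I am considering (assuming probs is a [2, 256, 256] float32 probability tensor, with index 1 the object class):

#include <torch/torch.h>
#include <opencv2/opencv.hpp>

cv::Mat probToMat(const torch::Tensor& probs) {
    // Make sure the object-class slice is on the CPU and densely packed.
    torch::Tensor fg = probs[1].to(torch::kCPU).contiguous();

    // Wrap the tensor memory, then clone so the Mat owns its own copy
    // and stays valid after `fg` goes out of scope.
    cv::Mat view(static_cast<int>(fg.size(0)), static_cast<int>(fg.size(1)),
                 CV_32FC1, fg.data_ptr<float>());
    return view.clone();
}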
Thank you for your attention.
I would like to offer a couple of notes here so that people can perhaps get a bigger picture. This code is great and simply works (it is very difficult to find working code for libtorch in general, period).
A usual workflow would include PyTorch (where most people develop and train the model). After the model has been trained, you need to move it to the CPU (that is, RAM, not VRAM) and export it using tracing, which is part of TorchScript. This process basically serializes your model so that it can be loaded in both Python and C++.
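To make the C++ loading side concrete, here is a minimal sketch (the file name "model_traced.pt" and the dummy input shape are placeholders for whatever your traced model expects):

#include <torch/script.h>
#include <iostream>
#include <vector>

int main() {
    // Load the module that was exported from Python via tracing (TorchScript).
    torch::jit::script::Module module = torch::jit::load("model_traced.pt");
    module.eval();

    // Run a forward pass with a dummy input; adjust the shape to your model.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 1, 64, 64}));

    torch::Tensor out = module.forward(inputs).toTensor();
    std::cout << out.sizes() << std::endl;
    return 0;
}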
A very important aspect is how you train your model. I was just playing around with SRCNN (one of the oldest super-resolution models out there) and it dawned on me that I had trained it with single-channel images, not 3-channel ones. So, accordingly, when you load an image with OpenCV, you will have to take that into account too.
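On the OpenCV side that could look roughly like this (a sketch assuming the model expects a single-channel float input scaled to [0, 1]; plain grayscale is used here just to keep it short):

#include <torch/torch.h>
#include <opencv2/opencv.hpp>
#include <string>

torch::Tensor loadSingleChannelInput(const std::string& path) {
    // Read the image, reduce it to one channel and scale to [0, 1] floats,
    // to match a model that was trained on single-channel images.
    cv::Mat bgr = cv::imread(path, cv::IMREAD_COLOR);
    cv::Mat gray, gray_f;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    gray.convertTo(gray_f, CV_32FC1, 1.0 / 255.0);

    // Wrap the Mat data as a [1, 1, H, W] tensor; clone() copies out of OpenCV's buffer.
    return torch::from_blob(gray_f.data, {1, 1, gray_f.rows, gray_f.cols},
                            torch::kFloat32).clone();
}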