This time the cv::Mat values seem normal, but the result is still not right.
What I want is to convert a multi-dimensional tensor of shape [19, 32, 46] into a vector of length 19, where every element is a cv::Mat of size 32×46.
Now the dims are right; the issue is that the values copied into the cv::Mat are still wrong — they are not the same values as in the tensor.
std::vector<cv::Mat> heatMaps(heatMapsTensor.size(0));
for (size_t i = 0; i < heatMaps.size(); i++) {
    torch::Tensor one_heat_map = heatMapsTensor[i];
    cv::Mat one_mat(heatMapsTensor.size(1), heatMapsTensor.size(2), CV_32FC1);
    std::memcpy(one_mat.data, one_heat_map.data<float>(), sizeof(float) * one_heat_map.numel());
    heatMaps[i] = one_mat;
}
I just saw the usage tensor_a.data<float>(), which seemed like a clue, so I changed to this — but it is still not right.
I am trying another approach:
get the tensor's data pointer, say float* p, then copy its data into the cv::Mat. However, how should I get the pointer of a tensor? Using tensor_a.data_ptr()? But that returns void*, not the float* I need.
It works! I think the step "out_tensor = out_tensor.to(torch::kCPU)" is crucial — am I right??
So we must move the output tensor from the GPU to the CPU before we can convert the data to a Mat?!
Here I got two probability maps from a semantic segmentation task. One is for the background class and the other for the object class. Now I want to convert from torch to cv::Mat, and I simply use this one line of code:
Mat seg_map(256, 256, CV_32FC1, probs[1].data_ptr());
However, I found there are some differences from the model inference in Python. Is there a better way to copy tensor data to a Mat?
Thank you for your attention.