I am trying to convert a torch::Tensor to a cv::Mat with int32 data:
torch::Tensor A = torch::randint(0, 30, {4, 7}, torch::TensorOptions().dtype(torch::kInt32));
cv::Mat cv_A(4, 7, CV_32SC1);
std::memcpy((void*)cv_A.data, A.data_ptr(), sizeof(torch::kInt32) * A.numel());
When I print out the results, it shows:
# Torch Tensor
16 29 8 18 3 17 13
4 17 0 26 22 21 26
28 29 28 11 25 19 28
11 25 13 11 23 20 28
[ CPUIntType{4,7} ]
and
# OpenCV Mat
[16, 29, 8, 18, 3, 17, 13;
0, 1, 4, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0;
0, 0, 0, 0, 0, 0, 0]
The results also do not match in the case of int16:
torch::Tensor A = torch::randint(0, 30, {4, 7}, torch::TensorOptions().dtype(torch::kInt16));
cv::Mat cv_A(4, 7, CV_16SC1);
std::memcpy((void*)cv_A.data, A.data_ptr(), sizeof(torch::kInt16) * A.numel());
When I print A and cv_A for the above code, I get:
# Torch Tensor
6 6 10 3 26 29 3
2 24 21 10 27 3 8
21 8 8 27 19 18 22
16 2 5 3 28 11 16
[ CPUShortType{4,7} ]
# OpenCV Mat
[6, 6, 10, 3, 26, 29, 3;
2, 24, 21, 10, 27, 3, 8;
0, 0, 0, 0, 0, 0, -26544;
28670, 21872, 0, 0, 0, 0, 0]
However, I do get matching results when uint8 is used:
torch::Tensor A = torch::randint(0, 30, {4, 7}, torch::TensorOptions().dtype(torch::kUInt8));
cv::Mat cv_A(4, 7, CV_8UC1);
std::memcpy((void*)cv_A.data, A.data_ptr(), sizeof(torch::kUInt8) * A.numel());
# Torch Tensor
20 21 22 5 10 21 1
7 5 22 25 23 16 2
8 3 15 20 17 16 27
19 29 25 8 8 26 16
[ CPUByteType{4,7} ]
# OpenCV Mat
[ 20, 21, 22, 5, 10, 21, 1;
7, 5, 22, 25, 23, 16, 2;
8, 3, 15, 20, 17, 16, 27;
19, 29, 25, 8, 8, 26, 16]
I am using the latest version of Libtorch, i.e. 1.5.1.
Can someone tell me what the issue is here? Is there something wrong with the datatypes being used?
P.S. I have gone through most of the related issues posted on the forum and couldn't come up with a solution for this.