I would like to convert an image (an OpenCV array) to a tensor for deep learning model inference.

How do I port the Python code below to C++ using LibTorch?

img_transforms = transforms.Compose([transforms.ToTensor(),
                                     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

P.S. I found the example below online:

torch::Tensor CVMatToTensor(cv::Mat mat)
{
    std::cout << "converting cvmat to tensor\n";
    cv::cvtColor(mat, mat, cv::COLOR_BGR2RGB);
    cv::Mat matFloat;
    mat.convertTo(matFloat, CV_32F, 1.0 / 255);
    auto size = matFloat.size();
    auto nChannels = matFloat.channels();
    // clone() so the tensor owns its data; matFloat is destroyed when the function returns,
    // and from_blob alone would leave the tensor pointing at freed memory
    auto tensor = torch::from_blob(matFloat.data, { 1, size.height, size.width, nChannels }).clone();
    // permute from NHWC to NCHW, the layout PyTorch models expect
    return tensor.permute({ 0, 3, 1, 2 });
}

but I am not sure how to implement 'transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))' in C++ using LibTorch (or by any other method).
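My current attempt is based on the fact that transforms.Normalize(mean, std) just computes (x - mean) / std per channel, so a minimal sketch in LibTorch could be the following (the function name normalize and the {1, C, H, W} float layout are my assumptions, matching the output of the conversion function above):

#include <torch/torch.h>
#include <iostream>

// Sketch of transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) for a
// {1, 3, H, W} float tensor whose values are already in [0, 1]
// (ToTensor's 1/255 scaling is assumed to have been applied already).
torch::Tensor normalize(torch::Tensor tensor) {
    // one mean/std per channel, shaped {1, 3, 1, 1} so they broadcast over H and W
    auto mean = torch::tensor({0.5, 0.5, 0.5}).view({1, 3, 1, 1});
    auto std  = torch::tensor({0.5, 0.5, 0.5}).view({1, 3, 1, 1});
    return (tensor - mean) / std;
}

int main() {
    // dummy 1x3x2x2 "image" with all pixels at 0.5, which should map to 0.0
    auto img = torch::full({1, 3, 2, 2}, 0.5);
    std::cout << normalize(img) << "\n";
}

Since all three channels share the same mean and std here, the body could equally be written in place as tensor.sub_(0.5).div_(0.5), but is this the right approach?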

Thank You.