How can I normalize input in C++?

I am running a simple model in C++. The input tensors are created as:

std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::from_blob(X_vec.data(), sizes));

How can I normalize the input data? I found the following function:

torch::data::transforms::Normalize<>(0.5, 0.5)

But I do not know how to apply it.

Hi @James_Trueb

these transforms are meant to be applied to datasets. For example, have a look at the MNIST example, where you will find these lines:

auto test_dataset = torch::data::datasets::MNIST(
                        kDataRoot, torch::data::datasets::MNIST::Mode::kTest)
                        .map(torch::data::transforms::Normalize<>(0.1307, 0.3081))
                        .map(torch::data::transforms::Stack<>());

What it does internally is simple: it subtracts the mean and divides by the standard deviation. So what you may be looking for is something like

for (auto& t : inputs) {
	// Same as Normalize<>(0.5, 0.5): (x - mean) / stddev
	t = t.toTensor().sub(0.5).div(0.5);
}
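
For completeness, here is a minimal self-contained sketch of the whole flow. It assumes X_vec is a std::vector<float> and picks an arbitrary shape, since neither is shown in the question:

#include <torch/script.h>
#include <vector>

int main() {
    // Hypothetical buffer and shape; adjust to your actual data.
    std::vector<float> X_vec(1 * 1 * 28 * 28, 0.25f);
    std::vector<int64_t> sizes = {1, 1, 28, 28};

    // from_blob does not copy, so clone() if X_vec may go out of scope.
    torch::Tensor t = torch::from_blob(X_vec.data(), sizes).clone();

    // Same normalization as Normalize<>(0.5, 0.5): (x - mean) / stddev.
    t = t.sub(0.5).div(0.5);

    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(t);
    return 0;
}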

Can you tell me more about X_vec? What datatype does it have?

Thank you for the answer. I am not working with a dataset, so I do not need to use the transforms, as you stated. The input to the network is a one-channel 16-bit signed image. I changed my code to normalize the image like this:

// Convert the 16-bit signed image to float so it matches the model's input dtype
input_img.convertTo(input_img, CV_32FC1);
// Normalize: subtract the mean, divide by the standard deviation
input_img = (input_img - mean) / stddev;
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::from_blob(input_img.data, { 1, 1, input_img.rows, input_img.cols }));

I guess it does not matter whether I normalize before or after placing the image into the tensor; please tell me if I'm wrong.
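
For reference, a sketch of the tensor-side equivalent, assuming input_img has already been converted to CV_32FC1 and that mean and stddev are plain double scalars (they are not shown in the snippet). Note that from_blob shares memory with the cv::Mat, so clone() before the Mat changes or goes out of scope:

// Build the tensor first, then normalize it.
torch::Tensor t = torch::from_blob(
    input_img.data, { 1, 1, input_img.rows, input_img.cols }, torch::kFloat32);
t = t.clone();               // own the memory; from_blob does not copy
t = t.sub(mean).div(stddev); // same (x - mean) / stddev as the cv::Mat version

std::vector<torch::jit::IValue> inputs;
inputs.push_back(t);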

Yes, that should work just fine, I guess. So input_img is a cv::Mat?

Perfect. Yes, input_img is a cv::Mat!