Different results in C++ inference compared to Python inference

Hi,
I used the code below for the C++ implementation, and its results are completely different from the Python results.
Please provide suggestions.

#include <torch/script.h> // One-stop header.

#include <iostream>
#include <memory>
#include <opencv2/opencv.hpp>

#include <string>
#include <vector>

std::string image_path = "/home/Desktop/libtorch-cxx11-abi-shared-with-deps-1.9.0+cpu/libtorch/example-app-FAN/crop_FRAME0020.jpg";

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }

  torch::jit::script::Module module;
  try {
    // Deserialize the ScriptModule from a file using torch::jit::load().
    module = torch::jit::load(argv[1]);
  }
  catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    return -1;
  }

  std::cout << "ok\n";
  // Create a vector of inputs (note: this dummy input is not used below;
  // forward() is called with tensor_image directly).
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 256, 256}));

  // Load the input image and convert from OpenCV's BGR to RGB.
  auto image = cv::imread(image_path, cv::ImreadModes::IMREAD_COLOR);
  cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
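
  // Sanity check: cv::imread returns an empty Mat on failure, and
  // torch::from_blob below would then wrap an invalid data pointer.
  if (image.empty()) {
    std::cerr << "could not read image: " << image_path << "\n";
    return -1;
  }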

  // Convert to a tensor: HWC uint8 -> CHW float in [0, 1], plus a batch dim.
  torch::Tensor tensor_image = torch::from_blob(image.data, {image.rows, image.cols, 3}, torch::kByte);
  tensor_image = tensor_image.permute({2, 0, 1});
  tensor_image = tensor_image.toType(torch::kFloat);
  tensor_image = tensor_image.div(255);
  tensor_image = tensor_image.unsqueeze(0);
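
  // Assumption (not confirmed by the original post): if the Python pipeline
  // normalizes the input, e.g. with torchvision's transforms.Normalize, the
  // same step is needed here for the results to match. Sketch with
  // placeholder ImageNet statistics; the real values depend on the training
  // setup:
  // auto mean = torch::tensor({0.485f, 0.456f, 0.406f}).view({1, 3, 1, 1});
  // auto stdev = torch::tensor({0.229f, 0.224f, 0.225f}).view({1, 3, 1, 1});
  // tensor_image = (tensor_image - mean) / stdev;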

  // Execute the model; forward() returns a torch::jit::IValue.
  auto output = module.forward({tensor_image});
  std::cout << output << "\n";
}
The model inference in Python returns a list of values. After using this code, I am getting output like "Columns 10 to 18  0.2104  0.1923  0.2044", etc. I don't have any idea why it's like that.
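
For reference, a minimal sketch of how I would expect the output to be unpacked for a value-by-value comparison with Python, assuming the module returns a single tensor (the "Columns 10 to 18" text is just how LibTorch pretty-prints wide tensors, not a different kind of result). If the module actually returns a list, output.toList() would be needed instead of toTensor(). Replacing the last two lines of main():

// Sketch under the assumption that forward() returns one tensor.
at::Tensor out = module.forward({tensor_image}).toTensor();
at::Tensor flat = out.flatten();
int64_t n = flat.size(0) < 10 ? flat.size(0) : 10;  // print the first few values
for (int64_t i = 0; i < n; ++i) {
  std::cout << flat[i].item<float>() << " ";
}
std::cout << "\n";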