PyTorch replication in C++

I want to replicate the preprocessing code below, which I use for inference of a neural network model, in C++:

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # Resize images to 224x224
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))  # Normalize the image tensors
])

The C++ code that I am using:


    // convert from BGR to RGB (cv::imread loads images as BGR)
    cv::cvtColor(image_org, image, cv::COLOR_BGR2RGB);

    // resize
    cv::resize(image, image, cv::Size(sizeX, sizeY));

    // Convert the image to floating-point type and scale to [0, 1]
    cv::Mat floatImage;
    image.convertTo(floatImage, CV_32F, 1.0 / 255.0);

    // Define mean and standard deviation values
    cv::Scalar mean(0.5, 0.5, 0.5);
    cv::Scalar stdDev(0.5, 0.5, 0.5);

    // Subtract mean and divide by standard deviation (per channel)
    floatImage -= mean;
    floatImage /= stdDev;

    // reshape to 1D only after normalization, so the per-channel
    // mean/stddev are applied to the correct channels
    floatImage = floatImage.reshape(1, 1);

But the accuracy differs by around 5% between the PyTorch and the C++ versions.

If you suspect the preprocessing step is causing the difference, preprocess the same image in both pipelines and compare the resulting tensors element by element.

Yes, I have compared them and the outputs are not the same.

This could be interesting: The dangers behind image resizing