A tiny difference between outputs of PyTorch and LibTorch

Hello,

there are different outputs between my Python code and my C++ code. I simply loaded the ResNet101 model and fed it a static input created with torch.ones, following these instructions: Loading a TorchScript Model in C++ — PyTorch Tutorials 1.9.0+cu102 documentation.
Using:
PyTorch 1.7.1
LibTorch 1.7.1
CentOS 7.9

Python Code

import torch
import torchvision

torch.manual_seed(0)

model = torchvision.models.resnet101()
in_data = torch.ones(1, 3, 224, 224)

model.eval()
output = model(in_data)

print(output[0, :5].detach().numpy())

traced_script_module = torch.jit.script(model)
traced_script_module.save("traced_resnet_model.pt")

C++ code
#include <torch/script.h> // One-stop header.

#include <iostream>
#include <memory>
#include <vector>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example <path-to-exported-script-module>\n";
    return -1;
  }

  torch::jit::script::Module module;
  try {
    // Deserialize the ScriptModule from a file using torch::jit::load().
    module = torch::jit::load(argv[1]);
  }
  catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    return -1;
  }
  std::cout << "ok\n";

  // Create a vector of inputs.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));

  // Execute the model and turn its output into a tensor.
  at::Tensor output = module.forward(inputs).toTensor();

  std::cout << output.slice(1, 0, 5) << '\n';
}

and the Python output is [-9897.268 -1432.8728 -2543.7222 11986.332 3412.1372], but the C++ output is [-9897.2686 -1432.8741 -2543.7183 11986.331 3412.1426]. Some of the values only agree to a few decimal places.
Can someone explain that?
@ptrblck

Hello,

The differences are probably only in how the values are printed. Compare the raw values, or print both with the same precision.
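For example, taking the two outputs you posted (a rough check with the values hard-coded from your post), the relative differences are on the order of 1e-6:

import numpy as np

# Values copied from the two outputs quoted above.
py_out  = np.array([-9897.268,  -1432.8728, -2543.7222, 11986.332, 3412.1372], dtype=np.float32)
cpp_out = np.array([-9897.2686, -1432.8741, -2543.7183, 11986.331, 3412.1426], dtype=np.float32)

# Print both with the same fixed precision.
np.set_printoptions(precision=4, suppress=True)
print(py_out)
print(cpp_out)

# Relative differences are around 1e-6, i.e. float32 round-off.
print(np.abs(py_out - cpp_out) / np.abs(py_out))
print(np.allclose(py_out, cpp_out, rtol=1e-5))

float32 only carries about 7 significant decimal digits, so relative differences around 1e-6 between two different runtimes (different kernels, different reduction order) are expected.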

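You could also reload the saved module back in Python and compare it with your eager output; if those match, the tiny difference comes from the C++ run rather than from scripting. Something like this, reusing in_data, output, and the file name from your Python script:

# Reload the saved TorchScript module and compare with the eager output
# (reuses in_data, output, and the file name from the Python script above).
loaded = torch.jit.load("traced_resnet_model.pt")
loaded.eval()

with torch.no_grad():
    scripted_out = loaded(in_data)

print(scripted_out[0, :5].numpy())
# Should be True if scripting itself does not change the result.
print(torch.allclose(output, scripted_out))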
Pascal