Why is there a tiny difference between the outputs of PyTorch and LibTorch?

These are the first 10 values of my PyTorch result:
tensor([-1.3709, -0.8062, 1.9198, 0.4937, -2.9625, 1.2369, -0.3699, -0.1852, 4.9301, 4.0436], grad_fn=<SliceBackward>)

These are the first 10 values of my LibTorch (C++) result:
-1.3711 -0.8062 1.9193 0.4921 -2.9617 1.2358 -0.3710 -0.1846 4.9292 4.0416 [ Variable[CUDAFloatType]{1,10} ]

I tested both implementations of my code, and the only difference is the LANGUAGE I USE. The direct input to the model is the same!

Can someone explain that?


It might just be the printing: C++ might be printing at a different precision than the Python side.
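One way to rule out a formatting difference on the Python side is to raise the print precision, since PyTorch rounds to 4 decimal places by default. A minimal sketch (the tensor values are just placeholders):

```python
import torch

t = torch.tensor([-1.3709, -0.8062, 1.9198])

# Default formatting rounds to 4 decimal places
print(t)

# Show 8 decimal places instead, to compare digits beyond the default
torch.set_printoptions(precision=8)
print(t)
```

If the extra digits still disagree with the C++ output, the difference is in the values themselves, not the printing.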

So you mean the result values should be exactly the same?
But the difference seems too large to be just a printing issue, doesn't it?

I’m saying the results are probably the same within float precision limits (i.e., equal up to the order of 1e-5) – but the print formatting in Python and C++ might differ – check that further.
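A quick way to test whether two results agree up to float noise is `torch.allclose` with an explicit tolerance. A sketch using the two sets of values posted above (copied by hand, so treat them as illustrative):

```python
import torch

py_out  = torch.tensor([-1.3709, -0.8062, 1.9198, 0.4937, -2.9625])
cpp_out = torch.tensor([-1.3711, -0.8062, 1.9193, 0.4921, -2.9617])

# Equal up to 1e-5 absolute tolerance?
print(torch.allclose(py_out, cpp_out, atol=1e-5))

# Largest element-wise discrepancy
print((py_out - cpp_out).abs().max().item())
```

For the numbers posted here the maximum difference is on the order of 1e-3, which is larger than plain float32 rounding noise, so something beyond print formatting may be going on.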

I’m having the same problem, and I don’t think it’s just float precision limits…
I’ve checked almost the entire intermediate tensor (size about [20, 512, 100, 100]), and I’ve found that only some parts of the tensor differ slightly from the Python result (still too large a difference to be float precision…), while most of the tensor differs hugely…