Numerical differences between numpy and pytorch?

Hello all,

When computing the mean and std on numpy or pytorch tensors, they yield different results. How come?

import numpy as np
import torch

rng = np.random.RandomState(1)
dataset1 = rng.uniform(low=-0.01, high=0.01, size=(1000, 20))
dataset2 = torch.from_numpy(dataset1)
print(dataset1.mean(), dataset1.std())
print(dataset2.mean(), dataset2.std())

returns
2.5537095782416174e-05 0.005769608507668796
tensor(-3.14198780749532207562380037302319e-06, dtype=torch.float64) tensor(0.00576444647196127767790896356814, dtype=torch.float64)


Hi,

This comes from the limited precision of floating point numbers.
Each float32 operation has a relative precision of roughly 1e-7, and accumulating a large number of values can turn those rounding errors into visible differences.
The same holds for float64, where the per-operation error starts around 1e-16 and grows as more values are accumulated.
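As a rough illustration, here is a minimal sketch (with made-up data of the same scale as in your snippet, not the exact computation above): summing the same float32 values in two different orders already disagrees in the last digits.

import numpy as np

# Made-up data; any long list of small floats shows the effect.
rng = np.random.RandomState(0)
values = rng.uniform(low=-0.01, high=0.01, size=20000).astype(np.float32)

# Accumulate the same numbers front-to-back and back-to-front in float32.
forward = np.float32(0.0)
for v in values:
    forward += v

backward = np.float32(0.0)
for v in values[::-1]:
    backward += v

print(forward, backward)                # typically differ in the last digits
print(values.astype(np.float64).sum())  # float64 reference sum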

Hello AlbanD,
Thank you for your quick response and time.
Maybe I do not fully understand. Shouldn't pytorch and numpy operations yield the same values when initialized with the same floating point precision, assuming the rounding rules are equal?

Hi,

Unfortunately no :confused:
The reason is that floating point operations are not associative: (a + b) + c != a + (b + c). So any difference in the order in which values are accumulated leads to such discrepancies.
For these ops, both pytorch and numpy use multithreading, but because they split up and accumulate the work in slightly different ways, you see these differences.
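A tiny self-contained example of the non-associativity (plain Python floats, nothing pytorch- or numpy-specific):

a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0, because b + c rounds back to -1e16 in float64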

Note that some operations are not even deterministic (usually on the GPU), and running the same code twice in a row won't give you bit-identical results. See the note on reproducibility in the docs if you want to learn more.
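If you want PyTorch to raise an error instead of silently using a nondeterministic kernel, a minimal sketch looks like this (which ops are affected depends on your version and device):

import torch

torch.manual_seed(0)                      # fix the RNG seed
torch.use_deterministic_algorithms(True)  # error out if an op has no deterministic implementation

x = torch.rand(1000, 20, dtype=torch.float64)
print(x.mean(), x.std())                  # reproducible across runs on the same setup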
