PyTorch Data Value Inconsistencies

I found this weird inconsistency in PyTorch:

import torch

a = torch.ones(1)
b = a + 0.1
b[0]          # b[0] here is 1.100000023841858
b.numpy()[0]  # 1.1
c = torch.ones(1) * 1000
d = c + 0.1
d[0]          # d[0] is 1000.0999755859375
d.numpy()[0]  # 1000.1

The error between x[0] and what it should be gets larger and larger as x grows, and this seems to make the gradient-check function get_numerical_jacobian() especially unstable when dealing with large inputs.
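
To illustrate what I mean, here's a rough sketch (just my own toy example, nothing from get_numerical_jacobian itself): the spacing between adjacent float32 values grows with the magnitude of the number, so a small step like 0.1 is represented less and less exactly as the base value gets larger.

import numpy as np

# np.spacing gives the gap to the next representable float32 value;
# it grows roughly in proportion to the magnitude of x, so adding 0.1
# gets absorbed into rounding error for large x.
for x in [1.0, 1000.0, 1e6, 1e8]:
    v = np.float32(x) + np.float32(0.1)
    print(x, float(v), float(np.spacing(np.float32(x))))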

Can anyone explain why this is the case?

It’s just a matter of print formatting.

In [1]: import numpy as np

In [2]: import torch

In [3]: a = torch.ones(1)

In [4]: b = a+0.1

In [5]: b[0] == b.numpy()[0]
Out[5]: True

In [6]: type(b[0])
Out[6]: float

In [7]: type(b.numpy()[0])
Out[7]: numpy.float32

In [8]: np.set_printoptions(precision=10)


In [9]: b.numpy()
Out[9]: array([ 1.1000000238], dtype=float32)

see?
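
Another way to convince yourself, without touching numpy's print options (just a sketch reusing b from above): format both values to more digits and the same 1.10000002... shows up underneath.

# Both expressions read the same underlying float32 bits;
# only the default string formatting differs.
print(repr(float(b[0])))        # full precision of the float32 value, ~1.1000000238...
print('%.10f' % b.numpy()[0])   # force the numpy scalar to print more digits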
