Float precision in torch

Hello, Dear Forumers,

I ran into a precision problem in PyTorch when adding an EPS to a float number. Does anyone know why the following examples give different results for the comparison and the subtraction?

import torch

a = torch.ones(1, dtype=torch.float) * 2304
b = a + 0.0001
a, b, a==b, a - b
(tensor([2304.]), tensor([2304.]), tensor([1], dtype=torch.uint8), tensor([0.]))

a = torch.ones(1, dtype=torch.float) * 100
b = a + 0.0001
a, b, a==b, a - b
(tensor([100.]), tensor([100.0000991821]), tensor([0], dtype=torch.uint8), tensor([-9.9182128906e-05]))

Hello,

I found that:

a = torch.ones(1, dtype=torch.float) * 2047
a + 0.0001
>> tensor([2047.0001])

It seems that 2048 is a boundary. I think it must be related to the binary floating-point representation: the significand gets truncated, which causes the precision loss (please correct me if I am wrong).
But I cannot pin down the exact mechanism. Does anyone have an accurate explanation?
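
To narrow down where the boundary sits, a quick check like the following (just a sketch; the outputs in the comments are what I would expect, not verified on every torch version):

import torch

a = torch.ones(1, dtype=torch.float) * 2047
b = torch.ones(1, dtype=torch.float) * 2048
print(a + 0.0001)   # tensor([2047.0001]) -- the added EPS survives
print(b + 0.0001)   # tensor([2048.])     -- the added EPS is rounded away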

Hi, thanks for your reply. NumPy does not seem to have this issue.

import numpy as np

a = np.ones(1, dtype=np.float32)
a.dtype
dtype('float32')
a = 2048
a + 0.0001
2048.0001
a + 0.00001
2048.00001
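
Note that in the snippet above, a = 2048 rebinds a to a plain Python int, so a + 0.0001 is evaluated with Python's 64-bit floats rather than float32. With an actual float32 array I would expect NumPy to round the same way as the torch tensor (a minimal sketch, assuming default NumPy promotion rules):

import numpy as np

a = np.ones(1, dtype=np.float32) * 2304
print(a + 0.0001)      # expected: [2304.]  -- same rounding as the torch tensor
print(2304 + 0.0001)   # 2304.0001          -- plain Python floats are 64-bit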

0.0001 isn’t a multiple of a negative power of two, so it cannot be represented exactly in binary floating point. There is a detailed discussion of precision in the Wikipedia article.
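
To connect that to the numbers above, a back-of-the-envelope sketch (the exponents are simply read off from the power-of-two ranges the values fall in):

# float32 has a 24-bit significand, so for x in [2**e, 2**(e + 1)) the gap
# between neighbouring representable values is 2**(e - 23).
gap_at_2304 = 2.0 ** (11 - 23)   # 2304 lies in [2048, 4096) -> ~0.000244
gap_at_100  = 2.0 ** (6 - 23)    # 100 lies in [64, 128)     -> ~0.0000076
print(gap_at_2304, gap_at_100)

0.0001 is less than half of the 0.000244 step, so 2304 + 0.0001 rounds straight back to 2304, whereas at 100 the step is much smaller than 0.0001, so the sum lands on a nearby representable value (100.0000991821...). That is why the comparison and the subtraction behave differently in the two cases, and 2048 is exactly where the step size first exceeds 2 * 0.0001, which matches the boundary you observed.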

Best regards

Thomas