I ran into a precision problem in PyTorch when adding an EPS to a float number. Does anyone know why the following examples give different results (comparison, subtraction)?
a = torch.ones(1, dtype=torch.float) * 2304
b = a + 0.0001
a, b, a==b, a - b
(tensor([2304.]), tensor([2304.]), tensor([1], dtype=torch.uint8), tensor([0.]))
a = torch.ones(1, dtype=torch.float) * 100
b = a + 0.0001
a, b, a==b, a - b
(tensor([100.]), tensor([100.0000991821]), tensor([0], dtype=torch.uint8), tensor([-9.9182128906e-05]))
a = torch.ones(1, dtype=torch.float) * 2047
a + 0.0001
>> tensor([2047.0001])
It seems that 2048 is a boundary. I think it must be related to the binary floating-point representation: the significand has a fixed number of bits, so small additions get rounded away, which causes the precision loss (please correct me if I am wrong). But I cannot work out the exact mechanism. Does anyone have an accurate explanation?
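If the significand hypothesis is right, the spacing between adjacent float32 values (the ULP) should double at each power of two, and an added value only survives rounding when it exceeds half a ULP at that magnitude. Here is a small sketch to check this, using torch.nextafter to measure the ULP directly (the choice of sample magnitudes is just for illustration):

```python
import torch

for x in (100.0, 2047.0, 2304.0):
    t = torch.tensor(x, dtype=torch.float32)
    # Distance from t to the next representable float32 above it (one ULP).
    ulp = (torch.nextafter(t, torch.tensor(float("inf"))) - t).item()
    # The addition survives only if 0.0001 is more than half a ULP.
    survives = (t + 0.0001).item() != x
    print(f"x={x:7.1f}  ulp={ulp:.10f}  half-ulp={ulp / 2:.10f}  "
          f"0.0001 survives: {survives}")
```

At 100 the ULP is about 7.6e-06 and at 2047 it is about 1.2e-04, so in both cases 0.0001 exceeds half a ULP and the sum rounds to a new value. At 2304 (above the 2048 boundary) the ULP doubles to about 2.4e-04, half a ULP is about 1.2e-04 > 0.0001, and the sum rounds back to exactly 2304.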