Floating point precision with torch operations

The theoretical floating point precision is 2**32, which is ~4.3e9. I noted a weird behavior with the torch.eq operation: it seems that the limit is around 2.9e-8, see the example below.
Is there any documentation about those values? Which epsilon values are recommended to avoid numerical errors when using fp32 or fp16?

torch.tensor([1.]).eq(1.)
Out[35]: tensor([True])
torch.tensor([1. - 3e-8]).eq(1.)
Out[36]: tensor([False])
torch.tensor([1. - 2.9e-8]).eq(1.)
Out[37]: tensor([True])

Hi lkdci!

Your misconception here is that the full 32 bits of a 32-bit single-precision
floating-point number are used for precision. But 8 of those bits are used
for the number’s (binary) exponent, which is what gives floating-point
numbers their large dynamic range.

That leaves you then with (roughly speaking) only 24 bits of precision,
which translates, as you have noted, to about eight decimal digits of
precision.
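As a concrete illustration (my own sketch, not from pytorch's documentation), you can query the machine epsilon with torch.finfo and compare with a tolerance via torch.isclose instead of exact equality; the rtol / atol values below are just torch.isclose's defaults, not an official recommendation:

import torch

# Machine epsilon: gap between 1.0 and the next larger representable value.
print(torch.finfo(torch.float32).eps)   # ~1.19e-07 (24-bit significand)
print(torch.finfo(torch.float16).eps)   # ~9.77e-04 (11-bit significand)

# 1.0 - 3e-8 is more than half a float32 ulp below 1.0, so it rounds to a
# value distinct from 1.0; 1.0 - 2.9e-8 rounds back to exactly 1.0.
print(torch.tensor([1.0 - 3e-8]).eq(1.0))     # tensor([False])
print(torch.tensor([1.0 - 2.9e-8]).eq(1.0))   # tensor([True])

# Prefer a tolerance-based comparison over exact equality for float tensors.
a = torch.tensor([1.0 - 3e-8])
print(torch.isclose(a, torch.tensor(1.0), rtol=1e-5, atol=1e-8))  # tensor([True])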

Wikipedia’s single-precision-floating-point entry gives a good overview
of this.

(Beware, also, of pytorch’s dreaded TensorFloat-32 that automatically
and silently gives you distinctly less precision for some tensor
operations on some recent fancy gpus.)

Best.

K. Frank