The theoretical floating-point precision is 2**32, which is ~4.3e9. I noticed some odd behavior with the torch.eq operation: the limit seems to be around 2.9e-8, see the example below.
Is there any documentation about these values? Which epsilon values are recommended to avoid numerical errors when using fp32 or fp16?
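The original example is not reproduced here; a stand-in sketch of the behavior being described, using Python's `struct` module to emulate fp32 rounding (the specific deltas are illustrative):

```python
import struct

def to_f32(x):
    # Round a Python float (fp64) to the nearest fp32 value and back.
    return struct.unpack('f', struct.pack('f', x))[0]

a = to_f32(1.0)
b = to_f32(1.0 + 2.9e-8)   # delta below the fp32 half-ulp at 1.0 (~5.96e-8)
c = to_f32(1.0 + 1.2e-7)   # delta above the fp32 half-ulp at 1.0

print(a == b)  # True: 1.0 + 2.9e-8 rounds to exactly 1.0 in fp32
print(a == c)  # False: this difference survives the rounding
```

This is the same comparison torch.eq would make on fp32 tensors: both operands are rounded to fp32 first, so any delta below about half an ulp disappears before the equality check even runs.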

Your misconception here is that the full 32 bits of a 32-bit single-precision
floating-point number are used for precision. But 8 of those bits are used
for the number’s (binary) exponent, which is what gives floating-point
numbers their large dynamic range.

That leaves you then with (roughly speaking) only 24 bits of precision,
which translates, as you have noted, to about eight decimal digits of
precision.
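You can measure this directly. A small sketch that finds the machine epsilon (the gap between 1.0 and the next representable value) for fp32 and fp16, again using `struct` to emulate the narrower formats:

```python
import struct

def roundtrip(fmt, x):
    # Pack to the narrower float format and unpack back to a Python float.
    return struct.unpack(fmt, struct.pack(fmt, x))[0]

def machine_eps(fmt):
    # Smallest power of two eps such that 1 + eps is still distinguishable
    # from 1 after rounding to the target format.
    eps = 1.0
    while roundtrip(fmt, 1.0 + eps / 2) != 1.0:
        eps /= 2
    return eps

print(machine_eps('f'))  # fp32: 2**-23 ≈ 1.19e-07 (24-bit significand)
print(machine_eps('e'))  # fp16: 2**-10 ≈ 9.77e-04 (11-bit significand)
```

These epsilons are the usual starting point for comparison tolerances: relative tolerances well below them are meaningless, since the format cannot represent differences that small near 1.0.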

(Beware, also, of PyTorch’s dreaded TensorFloat-32, which automatically
and silently gives you distinctly less precision for some tensor
operations on some recent fancy GPUs.)
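If you suspect TF32 is affecting your results, it can be turned off globally; a minimal config snippet (these flags exist in current PyTorch releases and only matter on Ampere-or-newer NVIDIA GPUs):

```python
import torch

# Force full fp32 precision instead of TF32 for matmuls and
# cuDNN convolutions on GPUs that would otherwise use TF32.
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
```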