Float Overflow?

Some additional context:
You are creating these “integer” tensors using the default dtype of torch.float32.
As explained in the Wikipedia article on the IEEE 754 single-precision format (FP32), integers between 2**26 and 2**27 can only be represented as multiples of 8, so values in that range are rounded accordingly (and the spacing doubles again at each higher power of two).
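
For example (100000017.0 is used here purely as an illustration; it is not a multiple of 8, so float32 cannot store it exactly):

import torch

a = torch.tensor(100000017.0)  # default dtype is torch.float32
print(a.item())
> 100000016.0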

@Tony-Y is in fact creating these numbers as torch.float64, which can be checked via:

a = torch.tensor(100000016.0, dtype=float)  # Python's built-in float maps to torch.float64
print(a.type())
> torch.DoubleTensor

which has a much higher limit for these rounding errors (float64 has a 53-bit significand, so all integers up to 2**53 are represented exactly).
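
A quick check (again using 100000017.0 just as an example value):

b = torch.tensor(100000017.0, dtype=torch.float64)
print(b.item())
> 100000017.0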
