Torch.add floating point addition

Hi Cbd!

The core problem is that 98310.1 is not exactly representable by
the floating-point numbers that PyTorch (and your CPU and GPU) use
under the hood for arithmetic. (Just to be clear, neither is 0.1.)

Please consider carefully the following:

>>> import torch
>>> torch.__version__
'1.9.0'
>>> torch.set_printoptions (precision=20)
>>> bs = torch.tensor ([0.1], dtype = torch.float)
>>> bs
tensor([0.10000000149011611938])
>>> 1 + bs
tensor([1.10000002384185791016])
>>> (1 + bs) - 1
tensor([0.10000002384185791016])
>>> 98310 + bs
tensor([98310.10156250000000000000])
>>> (98310 + bs) - 98310
tensor([0.10156250000000000000])
>>> bd = torch.tensor ([0.1], dtype = torch.double)
>>> bd
tensor([0.10000000000000000555], dtype=torch.float64)
>>> 1 + bd
tensor([1.10000000000000008882], dtype=torch.float64)
>>> (1 + bd) - 1
tensor([0.10000000000000008882], dtype=torch.float64)
>>> 98310 + bd
tensor([98310.10000000000582076609], dtype=torch.float64)
>>> (98310 + bd) - 98310
tensor([0.10000000000582076609], dtype=torch.float64)
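
If it helps, you can also look directly at the exact values that get
stored. Here is a small sketch using only the Python standard library
(struct round-trips a value through single precision, and Decimal
displays the stored value exactly):

>>> import struct
>>> from decimal import Decimal
>>> # exact value of 0.1 stored as a double
>>> Decimal (0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> # exact value of 0.1 stored as a single
>>> Decimal (struct.unpack ('f', struct.pack ('f', 0.1))[0])
Decimal('0.100000001490116119384765625')
>>> # exact value of 98310.1 stored as a single
>>> Decimal (struct.unpack ('f', struct.pack ('f', 98310.1))[0])
Decimal('98310.1015625')

Neither 0.1 nor 98310.1 lands exactly on a representable value, and
the closest single-precision value to 98310.1 is the same
98310.1015625 that shows up in the float computation above.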

The Wikipedia article on round-off error gives a good introduction to
what is going on.
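
To put a number on the size of the round-off: in single precision,
the representable numbers near 98310 are spaced 2**-7 = 0.0078125
apart, so the closest you can get to 98310.1 is within about half of
that. Here is a sketch (assuming a PyTorch version recent enough to
have torch.nextafter, which the 1.9.0 shown above is):

>>> import torch
>>> t = torch.tensor ([98310.0], dtype = torch.float)
>>> # gap between 98310 and the next larger single-precision number
>>> (torch.nextafter (t, torch.tensor ([float ('inf')])) - t).item ()
0.0078125

Every single-precision value in this range is a multiple of
0.0078125, and the multiple closest to 98310.1 is the 98310.1015625
you saw above.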

This is all directly relevant to the related question you raised in
your other thread.

Best.

K. Frank
