Torch.add floating point addition

In the code below, torch.add gives a slightly higher floating-point value. The output is tensor([98310.1016]),
and after rounding off it is still tensor([98310.1016]).

I want the exact value 98310.1. I tried to round off, but it's not working. Any ideas are welcome.

import torch

a = torch.FloatTensor([98310])
b = 0.1
arr = torch.add(a, b)
print(arr)

# Attempt to round to 2 decimal places
n_digits = 2
rounded = (arr * 10**n_digits).round() / (10**n_digits)
print("after roundoff", rounded)

Hi Cbd!

The core problem is that 98310.1 is not exactly representable by
the floating-point numbers that pytorch (and your cpu and gpu) use under
the hood for arithmetic. (Just to be clear, neither is 0.1.)
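You can see this in plain Python, without pytorch at all. Here is a small sketch using the standard-library decimal module, which can display the exact binary value that a Python float actually stores:

```python
from decimal import Decimal

# Decimal(x) shows the exact decimal expansion of the binary double
# nearest to x. Neither 0.1 nor 98310.1 is exactly representable.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
print(Decimal(98310.1))
# slightly below 98310.1 -- the nearest representable double
```

So the "error" appears before any arithmetic happens: it is baked into the literals themselves.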

Please consider carefully the following:

>>> import torch
>>> torch.__version__
'1.9.0'
>>> torch.set_printoptions (precision=20)
>>> bs = torch.tensor ([0.1], dtype = torch.float)
>>> bs
tensor([0.10000000149011611938])
>>> 1 + bs
tensor([1.10000002384185791016])
>>> (1 + bs) - 1
tensor([0.10000002384185791016])
>>> 98310 + bs
tensor([98310.10156250000000000000])
>>> (98310 + bs) - 98310
tensor([0.10156250000000000000])
>>> bd = torch.tensor ([0.1], dtype = torch.double)
>>> bd
tensor([0.10000000000000000555], dtype=torch.float64)
>>> 1 + bd
tensor([1.10000000000000008882], dtype=torch.float64)
>>> (1 + bd) - 1
tensor([0.10000000000000008882], dtype=torch.float64)
>>> 98310 + bd
tensor([98310.10000000000582076609], dtype=torch.float64)
>>> (98310 + bd) - 98310
tensor([0.10000000000582076609], dtype=torch.float64)

The Wikipedia article on round-off error gives a good introduction to
what is going on.
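As a practical matter, if all you need is for the value to be displayed as 98310.1, one option (a sketch, assuming the value only needs one decimal place for display) is to do the arithmetic in double precision and format the scalar yourself, rather than trying to round the tensor in place:

```python
import torch

# Use double precision so the sum is much closer to 98310.1
a = torch.tensor([98310.0], dtype=torch.double)
b = 0.1
arr = torch.add(a, b)

# Format the scalar for display; the stored value is still the
# nearest double to 98310.1, not an exact 98310.1.
print(f"{arr.item():.1f}")  # prints 98310.1
```

Note that this only changes how the number is printed; exact decimal values can't be stored in a binary float of any width.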

This is all directly relevant to the related question you raised in this
other thread of yours:

Best.

K. Frank


OK, I understand now. Thanks.