Hi all,
I’m encountering an unexpected difference in the value of two tensors before and after addition.

I have received the following tensor from a Linear layer:

```python
x = torch.tensor([ 0.2163, -0.1675, -0.0950, -0.1593, -0.0628, -0.3286,  0.0187, -0.2372,
-0.1530,  0.2945,  0.0602,  0.1602, -0.0701,  0.0301, -0.0220, -0.1735,
-0.1145,  0.1231,  0.0635, -0.0197,  0.0776, -0.1025, -0.0822,  0.0577,
-0.0608,  0.0161,  0.2321, -0.1484,  0.0033,  0.1681, -0.0177, -0.1523,
-0.0126, -0.1447, -0.0075,  0.0568, -0.2305, -0.1494, -0.3213, -0.2324,
0.1063, -0.0171, -0.0016, -0.0474,  0.3222,  0.1174,  0.1325, -0.3385,
-0.0653, -0.1163, -0.1045, -0.1349,  0.0684, -0.1131,  0.3090, -0.1836,
-0.0581,  0.0181,  0.0861,  0.0921, -0.0187,  0.1681,  0.0273,  0.0134],
device='cuda:0')
```

and I then run the following few lines:

```python
outer = torch.ger(x, x)[0, 0]
A = torch.tensor([1e6], device='cuda:0')
sigma = 1e-6

print(A)
> tensor([1000000.], device='cuda:0')
print((1/sigma)*outer)
> tensor([46785.6914], device='cuda:0')

A = A + (1/sigma)*outer
print(A)
> tensor([1046785.6875], device='cuda:0')
```

This shows quite a large difference in the decimal places. I'm not sure whether there is something I'm missing, so it would be great to have this clarified. All tensors here are of dtype torch.float32.

Thanks

This small error is expected due to the limited floating-point precision.
As explained in the Wikipedia article on the single-precision floating-point format, the spacing of representable values is:

```
Decimals between 2**n and 2**(n+1): fixed interval 2**(n-23)
```
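You can check this spacing directly with `torch.nextafter`, which returns the next representable float after a given value (a quick sketch):

```python
import torch

# 1046785.6875 lies between 2**19 and 2**20, so adjacent float32
# values in this range should be 2**(19-23) = 0.0625 apart.
z = torch.tensor(1046785.6875)
nxt = torch.nextafter(z, torch.tensor(float('inf')))
print(nxt - z)  # tensor(0.0625)
```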

For your output `n` would be `19`, since:

```python
x = torch.tensor(1000000.)
y = torch.tensor(46785.6914)
z = x + y

print(2**19 < z)
> tensor(True)
print(z < 2**20)
> tensor(True)
```

which results in an eps of `2**(19-23) = 0.0625`.
The representable numbers are thus:

```
1046785.6875 (your output) < 1046785.6914 (theoretical output) < 1046785.7500 (next representable number = 1046785.6875 + 0.0625)
```
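You can also see the rounding happen on its own: the theoretical value itself cannot be stored in float32 and snaps to the nearest representable number:

```python
import torch

# 1046785.6914 is closer to 1046785.6875 than to 1046785.7500,
# so float32 rounds it down to the representable value below it.
t = torch.tensor(1046785.6914)
print(t)  # tensor(1046785.6875)
```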

You can verify the rounding via:

```python
for i in torch.linspace(0, 2**(19-23), 10):
    print(z + i)
> tensor(1046785.6875)
tensor(1046785.6875)
tensor(1046785.6875)
tensor(1046785.6875)
tensor(1046785.6875)
tensor(1046785.7500)
tensor(1046785.7500)
tensor(1046785.7500)
tensor(1046785.7500)
tensor(1046785.7500)
```
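If you need the extra digits, one option is to carry out the accumulation in float64, where the spacing near `2**19` is `2**(19-52)` and thus far below the digits shown here (a sketch of the same sum in double precision):

```python
import torch

# Same addition, but in float64: the result keeps the expected decimals.
x = torch.tensor(1000000., dtype=torch.float64)
y = torch.tensor(46785.6914, dtype=torch.float64)
print(x + y)
```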