Unexpected behavior with torch.max and torch.mean

In the code snippet below, a and b should be exactly equal: for a 1x1 tensor, both torch.max(a) and torch.mean(a) return its single element, so mathematically b = (1 - eps)*a + eps*a = a.

import torch
from torch.autograd import Variable

for i in range(1000):
    a = Variable(torch.rand(1, 1))
    eps = 0.1
    b = (1 - eps) * torch.max(a) + eps * torch.mean(a)
    print('b', b)
    print('a', a)
    print('b - a', b - a)
    assert torch.equal(a, b)

However, on the first iteration I get an assertion error saying they are not equal:

b Variable containing:
 0.9811
[torch.FloatTensor of size 1]

a Variable containing:
 0.9811
[torch.FloatTensor of size 1x1]

b - a Variable containing:
1.00000e-08 *
 -5.9605
[torch.FloatTensor of size 1x1]

Traceback (most recent call last):
  File "test.py", line 11, in <module>
    assert torch.equal(a,b)
AssertionError

Would anyone know why this is the case?

Here is another assertion that fails, for the same reason:

assert 1 - 0.8 - 0.2 == 0
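
Evaluating that expression directly shows the rounding error. With IEEE-754 double precision (what plain Python floats use), 0.8 and 0.2 have no exact binary representation, so the subtractions leave a tiny residue:

# the residue below is what CPython prints on a typical machine
print(1 - 0.8 - 0.2)  # -5.551115123125783e-17, not 0.0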

The reason is that floating-point operations have limited precision. Values like eps and 1 - eps have no exact binary representation, so (1 - eps)*x + eps*x is not guaranteed to round back to exactly x. So you can't expect a and b to be bitwise equal, but you can expect their difference to be very small; the -5.9605e-08 in your output is on the order of single-precision machine epsilon (about 1.19e-07).
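
Here is a minimal sketch of a tolerance-based check instead of exact equality (using plain tensors rather than the old Variable wrapper; the 1e-6 threshold is an arbitrary choice for illustration, and recent PyTorch versions also provide torch.allclose for this):

import torch

a = torch.rand(1, 1)
eps = 0.1
b = (1 - eps) * torch.max(a) + eps * torch.mean(a)

# Compare within a small absolute tolerance instead of bit-exact equality.
diff = (b - a).abs().max().item()
assert diff < 1e-6  # passes even when torch.equal(a, b) would fail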
