Difference between torch.sum and addition operator

I have noticed that the results of torch.sum and the addition operator are not always equal.

For example:

import torch

def f():
    a = torch.rand(100).view(10,10)
    s1 = torch.sum(a, dim=0)
    s2 = torch.zeros(10)
    for i in range(10):
        s2 += a[i]
    assert (s1 == s2).all()

for i in range(1000):
    f()

Running the above code results in an assertion failure. If I check the difference between s1 and s2, it is generally on the order of 1e-7.

Equivalent NumPy code, however, does not seem to have this issue.

import numpy as np

def f():
    a = np.random.rand(100).astype(np.float32).reshape((10,10))
    s1 = np.sum(a, axis=0)
    s2 = np.zeros(10, dtype=np.float32)
    for i in range(10):
        s2 += a[i]
    assert (s1 == s2).all()

for i in range(1000):
    f()

I have run this several times and haven’t had any assertion errors so far.

Is this behavior simply due to floating-point precision, or is something else causing it?

Comparing two floating-point numbers with the equality operator is bad practice in any language. A single reduction like torch.sum and a sequential loop can accumulate in different orders, and each order rounds differently.
If you use a proper way of comparing floats, such as torch.allclose — PyTorch 1.12 documentation or numpy.allclose — NumPy v1.23 Manual, you will find both results are correct given the precision of fp32.
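As a minimal sketch of the comparison in your repro (same shapes and loop as your original code), the two sums disagree bitwise but agree within fp32 tolerance:

```python
import torch

a = torch.rand(10, 10)
s1 = torch.sum(a, dim=0)   # single reduction over rows
s2 = torch.zeros(10)
for i in range(10):        # sequential accumulation, element by element
    s2 += a[i]

# Exact equality may fail because the two accumulation orders
# round differently, but the results agree within fp32 precision
# (default tolerances rtol=1e-5, atol=1e-8).
print(torch.allclose(s1, s2))  # True
```

The ~1e-7 discrepancies you observed are well inside allclose's default relative tolerance for values of this magnitude, which is why both results count as correct.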