Vectorized sum different from looped sum

Hi, I am using torch 1.7.1 and I noticed that a vectorized sum gives a different result from a sum computed in a loop when the indices are repeated. For example:

import torch

indices = torch.LongTensor([0,1,2,1])
values = torch.FloatTensor([1,1,2,2])
result = torch.FloatTensor([0,0,0])

looped_result = torch.zeros_like(result)

# sum values into looped_result one element at a time
for i in range(indices.shape[0]):
    looped_result[indices[i]] += values[i]

# vectorized equivalent using advanced indexing
result[indices] += values

print('result', result)
print('looped result', looped_result)

results in:

>> result tensor([1., 2., 2.])
>> looped result tensor([1., 3., 2.])

As you can see, the looped version has the correct sums while the vectorized one doesn't. Is it possible to avoid the loop and still get the correct result?

Since you are using duplicated indices, the behavior of result[indices] += values is undefined: when an index appears more than once, only one of the writes to that position is guaranteed to take effect, so the additions are not accumulated. Use index_put_ with accumulate=True instead:

result.index_put_((indices,), values, accumulate=True)
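For completeness, here is a small self-contained sketch comparing index_put_ with accumulate=True against index_add_, which (assuming 1-D indexing along a single dimension, as in your example) performs the same accumulation:

```python
import torch

indices = torch.LongTensor([0, 1, 2, 1])
values = torch.FloatTensor([1, 1, 2, 2])

# index_put_ with accumulate=True sums all values that map to the same index
result = torch.zeros(3)
result.index_put_((indices,), values, accumulate=True)

# index_add_ accumulates along one dimension and gives the same result here
alt = torch.zeros(3)
alt.index_add_(0, indices, values)

print(result)  # tensor([1., 3., 2.])
print(alt)     # tensor([1., 3., 2.])
```

Both produce the accumulated sums that your loop computes, without any Python-level iteration.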

Thank you very much, it works! I did not know about this method.