Is there any alternative to numpy.add.at in PyTorch?

NumPy has a function, np.add.at, that adds values to elements selected by a multi-index with repeated entries, such that every occurrence of a repeated index contributes to the sum. For example:

import numpy as np

A = np.zeros(5)
np.add.at(A, [1, 1, 2], 1)  # index 1 appears twice, so A[1] is incremented twice
A

produces:

array([0., 2., 1., 0., 0.])

I now badly need the same thing in PyTorch (I can't avoid repeated indices in my task), since plain in-place addition behaves differently:

import torch

A = torch.zeros(5)
A[[1, 1, 2]] += 1  # the repeated index 1 is only counted once
A

produces:

tensor([ 0.,  1.,  1.,  0.,  0.])

Is there any way to simulate the behavior of np.add.at with PyTorch operations?
Thank you!


It looks like I have found a solution myself:

import torch

A = torch.zeros(5)
# index_add_ accumulates every value, even for repeated indices
A.index_add_(0, torch.tensor([1, 1, 2]), torch.ones(3))
A

It seems this can also be used for multi-dimensional tensors if they are flattened beforehand, as sketched below. It would be very handy to have such a function for multi-dimensional tensors as well (if one doesn't exist already).
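A minimal sketch of the flattening approach (the shape and indices here are made up for illustration):

import torch

A = torch.zeros(3, 4)
rows = torch.tensor([0, 0, 2])  # row index 0 repeats
cols = torch.tensor([1, 1, 3])

# Convert the 2-D multi-index into a flat index, then accumulate on a flat view.
flat_idx = rows * A.size(1) + cols
A.view(-1).index_add_(0, flat_idx, torch.ones(3))
A

Since view(-1) returns a view, the in-place index_add_ writes straight into A, so no reshape back is needed.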


Hello,

Is there a scalable solution to this problem? I am working with multi-dimensional tensors and would like to accumulate values at positions given by an index tensor, where some indices appear more than once. In NumPy, np.add.at does the job.
Thanks

Check this StackOverflow answer
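For the multi-dimensional case, index_put_ with accumulate=True behaves like np.add.at and needs no manual flattening. A minimal sketch (shapes and indices chosen only for illustration):

import torch

A = torch.zeros(3, 4)
rows = torch.tensor([0, 0, 2])  # row 0 appears twice
cols = torch.tensor([1, 1, 3])

# With accumulate=True, values at repeated indices are summed, like np.add.at.
A.index_put_((rows, cols), torch.ones(3), accumulate=True)
A

The indices argument is a tuple with one index tensor per dimension, so the same call scales to tensors of any rank.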