Is there any alternative to np.add.at in PyTorch?

There is a function in NumPy called np.add.at which adds values to elements accessed by a multi-index with repeated elements, such that for each repeated element all of its corresponding values are included in the summation. E.g. see an example:

import numpy as np

A = np.zeros(5)
np.add.at(A, [1, 1, 2], 1)
A

array([0., 2., 1., 0., 0.])

Right now I badly need the same thing in PyTorch (I can't avoid repeated indices in my task), since plain summation behaves differently:

A = torch.zeros(5)
A[[1, 1, 2]] += 1


tensor([ 0.,  1.,  1.,  0.,  0.])

Is there any way to simulate the behavior of np.add.at with PyTorch operations?
Thank you!


Looks like I have already found a solution myself:

A = torch.zeros(5)
A.index_add_(0, torch.tensor([1, 1, 2]), torch.ones(3))

It seems this can also be used for multi-dimensional tensors if they are flattened beforehand. It would be very handy to have such a function for multi-dimensional tensors as well (if one doesn't already exist).
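For the multi-dimensional case, flattening may not even be necessary: a sketch using index_put_ with accumulate=True (the tensor shape and values below are my own example, not from the thread) accumulates over repeated multi-indices just like np.add.at:

import torch

# index_put_ with accumulate=True sums all contributions for a repeated
# multi-index instead of keeping only the last write.
A = torch.zeros(3, 3)
rows = torch.tensor([0, 0, 1])  # the multi-index (0, 2) appears twice
cols = torch.tensor([2, 2, 1])
vals = torch.tensor([1.0, 1.0, 5.0])
A.index_put_((rows, cols), vals, accumulate=True)
print(A)  # A[0, 2] == 2.0, A[1, 1] == 5.0
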



Is there any scalable solution to this problem? I am using multi-dimensional tensors, and I would like to sum elements using indices stored in another tensor, where some indices appear more than once. In NumPy, np.add.at does the job.

Check this StackOverflow answer
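One scalable option (a sketch of my own, not necessarily what that answer uses) is Tensor.scatter_add_, which accumulates values along a single dimension and sums whenever an index repeats, reproducing the np.add.at result from the first post:

import torch

# scatter_add_ adds each src value into A at the position given by idx,
# accumulating when the same index appears more than once.
A = torch.zeros(5)
idx = torch.tensor([1, 1, 2])
src = torch.ones(3)
A.scatter_add_(0, idx, src)
print(A)  # tensor([0., 2., 1., 0., 0.])

For higher-dimensional tensors, the indices can be linearized into a flat view first, since scatter_add_ operates along one dimension at a time.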