This might be relatively basic, and I'm sorry if there's already a solution to this that I couldn't find.
I'm trying to get rid of a for loop when gathering data into one tensor and adding the sum of those values to another tensor (some rows in that target tensor will be updated multiple times, others not at all).
If you look at the toy example here:
import torch

a = torch.zeros(2,2)
b = a.clone()
c = torch.tensor(
[[1,1],
[-1,-1]]
)
# only the last operation seems to have an effect
# try with one or the other to see the result change
# indc = [0,0,1]
indc = [1,0,0]
# sadly this does not work
a[indc] += c[indc]
print(a)
# works as expected
for i,r in zip(indc, c[indc]):
b[i] += r
print(b)
Is there a slick way of doing this that anyone knows of?
tensor.index_put_(..., accumulate=True) should work:
import torch

a = torch.zeros(2,2)
b = a.clone()
c = torch.tensor(
[[1,1],
[-1,-1]]
).float()
# only the last operation seems to have an effect
# try with one or the other to see the result change
# indc = [0,0,1]
indc = [1,0,0]
# this does not accumulate: with duplicate indices only one write per index takes effect
a[indc] += c[indc]
print(a)
# tensor([[ 1., 1.],
# [-1., -1.]])
# works as expected
for i,r in zip(indc, c[indc]):
b[i] += r
print(b)
# tensor([[ 2., 2.],
# [-1., -1.]])
# also works
a = torch.zeros(2,2)
a.index_put_((torch.tensor(indc),), c[indc], accumulate=True)
print(a)
# tensor([[ 2., 2.],
# [-1., -1.]])
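Since you are accumulating whole rows along dim 0, torch.Tensor.index_add_ might also fit this pattern; here is a minimal sketch reusing indc and the float version of c from above:
# alternative: index_add_ accumulates the given rows into a along dim 0
a = torch.zeros(2,2)
a.index_add_(0, torch.tensor(indc), c[indc])
print(a)
# tensor([[ 2., 2.],
#         [-1., -1.]])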