# Compute the sum of first k[i] elements of a tensor

Suppose I have a 2-d tensor:

``````
t
tensor([[1.0000, 3.0000, 4.0000, 2.0000, 1.0000],
        [2.0000, 3.0000, 3.5000, 4.0000, 3.5000]])
``````

and a LongTensor:

``````
c
tensor([2, 3])
``````

I want the sum of the first `c[i]` elements of the i-th row of `t`, i.e. `[4.0, 8.5]` in this case. How can I calculate this without iteration?

You have a 2-row tensor and your index tensor is `[2, 3]`, so I think what you are asking for is `t[:, 2].sum()` and `t[:, 3].sum()` — correct me if I am wrong. If that is so, how are you getting `[4.0, 8.5]`?

Sorry, but I am trying to get `t[0, :2].sum()` and `t[1, :3].sum()`, which are 4.0 and 8.5 respectively. Thank you for your reply.

Now I have a solution:

``````
t.cumsum(1)[torch.arange(len(c)), c-1]
tensor([4.0000, 8.5000])
``````

But it does not work if `c[i]` is 0 for any i (the index `c[i] - 1` becomes -1), and the `cumsum()` does a lot of redundant calculation. So maybe there is a better solution?
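One way to patch the zero case without a loop is to clamp the index so it never underflows and then mask the affected rows back to zero. A sketch (assuming a reasonably recent PyTorch, where a float tensor can be multiplied by a bool mask):

``````python
import torch

t = torch.tensor([[1., 3., 4., 2., 1.],
                  [2., 3., 3.5, 4., 3.5]])
c = torch.tensor([0, 3])  # first row: sum over zero elements

# clamp so c[i] == 0 does not index at -1, then zero out those rows
s = t.cumsum(dim=1)[torch.arange(len(c)), (c - 1).clamp(min=0)] * (c > 0)
print(s)  # tensor([0.0000, 8.5000])
``````

The `cumsum` still computes every running sum, so this only fixes the zero case, not the redundant work.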

Have you checked `torch.narrow`? You can take a slice of your tensor and then apply `sum(axis=1)`, but the elements of `c` would all need to have the same value, otherwise it won't return a result. Other than that, it looks like you really need to use loops.

Thanks, but it seems `torch.narrow` cannot slice a different length from each row.

Inspired by a friend Zijing Ou, here is an easy solution by constructing a mask:

``````
mask = torch.linspace(0, t.shape[1]-1, t.shape[1]).view(1, t.shape[1]).repeat(t.shape[0], 1) < c.view(t.shape[0], 1)
``````

and then `(t * mask).sum(1)` would get the desired results.
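For completeness, a runnable sketch of the masked-sum idea — using `torch.arange` in place of `linspace` and relying on broadcasting instead of `repeat`, which gives the same mask:

``````python
import torch

t = torch.tensor([[1., 3., 4., 2., 1.],
                  [2., 3., 3.5, 4., 3.5]])
c = torch.tensor([2, 3])

# column indices broadcast against per-row lengths: mask[i, j] = (j < c[i])
mask = torch.arange(t.shape[1]).view(1, -1) < c.view(-1, 1)
result = (t * mask).sum(dim=1)
print(result)  # tensor([4.0000, 8.5000])
``````

This also handles `c[i] == 0` naturally, since the corresponding mask row is all `False` and the sum is 0.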


Thanks for sharing. I’ll note this one.

Hello Youzunzhi!

I don’t know of any way to avoid a loop without summing over the
full tensor, such as by using `.cumsum()` (or Zijing’s masked-sum
suggestion).

After forming the sums, you have to index into them to get the
desired partial sum. I don’t know of a “clean” way of doing this;
the best I could come up with is to use `.take()`.

To accommodate the case in which an element of `c` is `0` (so that
you sum over zero elements of `t`, getting `0.0`), we “initialize” the
partial sums with `0.0` by prepending a zero slice to `t`.

Here is my approach:

``````
import torch
torch.__version__

t = torch.FloatTensor([[1.0000, 3.0000, 4.0000, 2.0000, 1.0000],
                       [2.0000, 3.0000, 3.5000, 4.0000, 3.5000]])
c = torch.LongTensor([2, 3])

t0 = torch.cat ((torch.zeros (t.shape[0], 1), t), 1)      # initialize partial sums with 0
tsum = t0.cumsum (dim = 1)                                # calculate partial sums
cp = c + t0.shape[1] * torch.arange (c.shape[0]).long()   # 1-d indices so we can use take()
sc = tsum.take (cp)                                       # get specified partial sums

print ('sc =\n', sc)                                      # print result
``````

And here is the output:

``````
>>> import torch
>>> torch.__version__
'0.3.0b0+591e73e'
>>>
>>> t = torch.FloatTensor([[1.0000, 3.0000, 4.0000, 2.0000, 1.0000],
...                        [2.0000, 3.0000, 3.5000, 4.0000, 3.5000]])
>>> c = torch.LongTensor([2, 3])
>>>
>>> t0 = torch.cat ((torch.zeros (t.shape[0], 1), t), 1)      # initialize partial sums with 0
>>> tsum = t0.cumsum (dim = 1)                                # calculate partial sums
>>> cp = c + t0.shape[1] * torch.arange (c.shape[0]).long()   # 1-d indices so we can use take()
>>> sc = tsum.take (cp)                                       # get specified partial sums
>>>
>>> print ('sc =\n', sc)                                      # print result
sc =

4.0000
8.5000
[torch.FloatTensor of size 2]
``````

Best.

K. Frank

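As a side note, on recent PyTorch versions the same zero-prepended-cumsum idea can be written with `gather` instead of flattening the indices for `take`; a sketch:

``````python
import torch

t = torch.tensor([[1., 3., 4., 2., 1.],
                  [2., 3., 3.5, 4., 3.5]])
c = torch.tensor([2, 3])

# prepend a zero column so c[i] == 0 maps to a partial sum of 0.0
t0 = torch.cat((torch.zeros(t.shape[0], 1), t), dim=1)

# gather the c[i]-th running sum from each row
sc = t0.cumsum(dim=1).gather(1, c.view(-1, 1)).squeeze(1)
print(sc)  # tensor([4.0000, 8.5000])
``````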