Summing non-zero tensor values across a dimension

Let’s say,

I have a tensor `A = [1.3, 0.0, 0.6, 0.7, 0.8]`.
I want to sum the consecutive non-zero runs, which gives `[1.3, 0.0, 2.1]`, and then choose the maximum, which is `2.1`. I also want to see the indices that were summed to produce it; in this case they will be `2, 3, 4`.
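To make the goal concrete, here is a small sketch assuming the chunk sizes (`[1, 1, 3]`, worked out by hand for this example) are already known; how to compute them automatically is the open question:

```python
import torch

A = torch.tensor([1.3, 0.0, 0.6, 0.7, 0.8])
# hand-derived chunk sizes for this example: [1.3], [0.0], [0.6, 0.7, 0.8]
chunks = torch.split(A, [1, 1, 3])
sums = torch.stack([c.sum() for c in chunks])  # ~ [1.3, 0.0, 2.1]
best = torch.argmax(sums)                      # index 2 -> values at indices 2, 3, 4
```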

So the sums of your tensor `A` would be calculated from runs of consecutive values, split wherever a zero occurs?
Will negative values count as a split or should they be summed?

There won’t be any negative values. If you are suggesting `torch.split()`, then I think we would have to pass a list with the size of every chunk. Is there any way to calculate the chunk sizes other than a naive for loop?

Unless there is more structure to your data and we know where the zero indices are, I don’t see a way without using a sequential for loop.

Assuming this degenerate case:

`[0, 1, 0, 2, 3, 0, 4, 5, 6, 7, 0, 8, 9, 10, 11, 12, 13, 14, 15, 0]`

If I recall correctly, the expected output is `[0, 1, 0, 5, 0, 22, 0, 92, 0]` plus an additional list of lists of indices: `[[0], [1], [2], [3, 4], [5], [6, 7, 8, 9], ...]`

If you want to generate those in a single pass without temporaries, a for loop is best.
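A minimal single-pass sketch for that case (`segment_sums` is an illustrative name, not an existing API; each zero is treated as its own singleton segment, as in the expected output above):

```python
import torch

def segment_sums(a):
    """Sum runs of consecutive non-zero values, treating each zero as its
    own singleton segment. Returns the per-segment sums and the index
    lists that produced them."""
    vals = a.tolist()
    sums, idxs = [], []
    for i, v in enumerate(vals):
        # start a new segment at a zero, at the very beginning,
        # or immediately after a zero segment
        if v == 0 or not idxs or vals[idxs[-1][-1]] == 0:
            sums.append(v)
            idxs.append([i])
        else:
            sums[-1] += v
            idxs[-1].append(i)
    return sums, idxs

A = torch.tensor([0., 1., 0., 2., 3., 0., 4., 5., 6., 7., 0.,
                  8., 9., 10., 11., 12., 13., 14., 15., 0.])
sums, idxs = segment_sums(A)             # sums == [0, 1, 0, 5, 0, 22, 0, 92, 0]
best = max(range(len(sums)), key=sums.__getitem__)  # segment with the max sum
```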

How will we be able to split across batches? For example, if my tensor changes to 2D:
`A = [[1.3, 0.0, 0.6, 0.7], [1.1, 1.2, 0.0, 0.0]]`
My split lengths would then be `[[1, 1, 2], [2, 1, 1]]`. `torch.split()` only takes a flat list, not a list of lists, so is this a new function that I will have to write?

I checked your feature request on GitHub as well, but in my case the split sizes change across a dimension.

`torch.split` currently only splits along one dimension, so you can’t provide different lengths for the same dim.
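As a workaround, assuming the per-row sizes are known, you could call `torch.split` once per row in a loop (a sketch, not a vectorized solution):

```python
import torch

A = torch.tensor([[1.3, 0.0, 0.6, 0.7],
                  [1.1, 1.2, 0.0, 0.0]])
split_sizes = [[1, 1, 2], [2, 1, 1]]  # per-row chunk sizes from the question

# torch.split takes a single size list per call, so iterate over the batch dim
row_sums = [torch.stack([chunk.sum() for chunk in torch.split(row, sizes)])
            for row, sizes in zip(A, split_sizes)]
# row_sums[0] ~ [1.3, 0.0, 1.3]; row_sums[1] ~ [2.3, 0.0, 0.0]
```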

Do you have any constraints regarding your tensor `A`? I.e., will the first value always be a valid (non-zero) one, or could it sometimes be zero?

Yes, the first value can sometimes be zero.
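In that case the chunk sizes can still be derived in one pass before calling `torch.split`; a loop-based sketch that handles a leading zero (`chunk_sizes` is a made-up helper name):

```python
import torch

def chunk_sizes(row):
    """Derive torch.split sizes from a 1-D tensor: each zero becomes its
    own singleton chunk; consecutive non-zeros share one chunk."""
    sizes = []
    prev_zero = True          # treat the start of the tensor as a boundary
    for v in row.tolist():
        if v == 0:
            sizes.append(1)   # a zero is always a singleton chunk
            prev_zero = True
        elif prev_zero:
            sizes.append(1)   # first value of a new non-zero run
            prev_zero = False
        else:
            sizes[-1] += 1    # extend the current non-zero run
    return sizes

A = torch.tensor([0.0, 1.3, 0.0, 0.6, 0.7, 0.8])  # leading zero
sizes = chunk_sizes(A)        # [1, 1, 1, 3]
chunks = torch.split(A, sizes)
```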