So, I have a 1-D (row) tensor like this:
values = torch.tensor([0.2, -0.1, 0.9, -0.4, 0.3, 0.1, -0.5, -0.7])
I also have a tensor holding the cumulative sum of the sub-tensor lengths (the lengths are 2, 2, 4):
cum_seqlens = torch.tensor([0, 2, 4, 8], dtype=torch.long)
I want to sum each subset of values based on cum_seqlens, like this:
# because the lengths are 2, 2, 4, the tensor is split into [0.2, -0.1], [0.9, -0.4], and [0.3, 0.1, -0.5, -0.7]
torch.tensor([0.2+(-0.1), 0.9+(-0.4), 0.3+0.1+(-0.5)+(-0.7)])
I could just split the tensor, but then the result is a tuple of sub-tensors. I want to keep the operation on the tensor itself because I need gradients to flow through this step.
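For reference, the split-based version I'd like to avoid looks roughly like this (the names other than values are my own):

```python
import torch

values = torch.tensor([0.2, -0.1, 0.9, -0.4, 0.3, 0.1, -0.5, -0.7],
                      requires_grad=True)

# torch.split returns a tuple of sub-tensors, one per length
lengths = [2, 2, 4]
parts = torch.split(values, lengths)

# summing each part and stacking gives the per-segment sums,
# but it goes through a Python-level loop over the tuple
sums = torch.stack([p.sum() for p in parts])  # ≈ [0.1, 0.5, -0.8]
```

This does keep the autograd graph intact, but I'd prefer a single tensor operation instead of the loop over the tuple.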
I already searched for a solution and found a similar problem here. But the accepted answer uses encoding_indices,
which I don't have. Also, mine is a row tensor while theirs is a column tensor.
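What I'm wondering is whether I could build the missing indices myself from the cumsum and then use scatter_add, which I believe is differentiable with respect to the source tensor. A sketch of what I have in mind (assuming the cumsum for lengths 2, 2, 4 is [0, 2, 4, 8]; all names are mine):

```python
import torch

values = torch.tensor([0.2, -0.1, 0.9, -0.4, 0.3, 0.1, -0.5, -0.7],
                      requires_grad=True)
cum_seqlens = torch.tensor([0, 2, 4, 8], dtype=torch.long)

# recover the segment lengths from adjacent cumsum entries: [2, 2, 4]
lengths = cum_seqlens[1:] - cum_seqlens[:-1]

# expand to one segment id per element: [0, 0, 1, 1, 2, 2, 2, 2]
segment_ids = torch.repeat_interleave(torch.arange(len(lengths)), lengths)

# out-of-place scatter_add sums values into their segment slots
sums = torch.zeros(len(lengths)).scatter_add(0, segment_ids, values)
```

Is something like this the right direction, or is there a more standard way?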