Summing a tensor in blocks according to sizes given in another tensor

torch.sum sums up a tensor along any given dimension. For example, if we have a tensor T of size [1000, 300], torch.sum(T, dim=0) will return a tensor of shape [300]. I don't want to sum the whole tensor, though. I have another tensor, [200, 200, 600], of block sizes. What I want is a tensor of shape [3, 300] in which the first row is the sum of the first 200 rows of T, the second row is the sum of the next 200 rows, and the last row is the sum of the remaining 600 rows of T. Is it possible to achieve this? It is guaranteed that the sum of the values in the sizes tensor (200 + 200 + 600) always equals the number of rows of the main tensor T. Thanks in advance! 🙂
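
For concreteness, here is a toy version of the behaviour I'm after (the [6, 2] tensor and the [2, 1, 3] sizes are just small stand-ins for the real shapes):

```python
import torch

T = torch.arange(12.0).reshape(6, 2)   # stand-in for the [1000, 300] tensor
sizes = [2, 1, 3]                      # stand-in for [200, 200, 600]

# Desired: a [3, 2] tensor whose row i is the sum of the i-th block of rows
expected = torch.stack([T[0:2].sum(dim=0), T[2:3].sum(dim=0), T[3:6].sum(dim=0)])
print(expected)
# tensor([[ 2.,  4.],
#         [ 4.,  5.],
#         [24., 27.]])
```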

Edit: I am aware of splitting in PyTorch; however, it requires a Python for loop over the chunks, as in the sketch below. It turns out that the number of elements in the sizes tensor is really large, and running this loop in every iteration is far too costly. I'm looking for an efficient way of doing this operation.
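
This is the torch.split version I'd like to avoid (shapes illustrative):

```python
import torch

T = torch.randn(1000, 300)
sizes = [200, 200, 600]   # sum(sizes) == T.shape[0]

# One .sum() call per chunk -- a Python-level loop over the blocks
result = torch.stack([chunk.sum(dim=0) for chunk in torch.split(T, sizes, dim=0)])
print(result.shape)  # torch.Size([3, 300])
```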

One idea I had is to write a separate function for this operation. It would take the block boundaries and the main tensor as input and return the required sums. However, I'm not sure how to parallelise such a function: it does the same thing on different parts of the input tensor. Does PyTorch provide any functionality to parallelise this? Parallelising should be really efficient here, since the per-block operations are independent of each other and the blocks are mutually exclusive and exhaustive. If this is not possible in PyTorch at the moment, are there any other external tools/libraries that can do it? One vectorised route I've been looking at is sketched below.
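
The candidate I've been experimenting with (not benchmarked yet, so treat it as a sketch) is to expand the block sizes into a per-row segment id and let index_add_ do all the blocks in a single scatter-add:

```python
import torch

T = torch.randn(1000, 300)
sizes = torch.tensor([200, 200, 600])

# Segment id for every row of T: [0]*200 + [1]*200 + [2]*600
seg_ids = torch.repeat_interleave(torch.arange(len(sizes)), sizes)

# Scatter-add every row of T into its block's slot -- no Python loop
# over blocks, and on GPU this launches a single kernel
out = torch.zeros(len(sizes), T.shape[1], dtype=T.dtype, device=T.device)
out.index_add_(0, seg_ids, T)
print(out.shape)  # torch.Size([3, 300])
```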
