Batch-wise tensor slicing

Is it possible to remove these two for loops?

import torch

a = torch.rand(10, 10)
bz = 4  # batch size: one box per batch element

# per-batch box boundaries: rows [x_start, x_end), columns [y_start, y_end)
x_start_index = torch.LongTensor([2, 4, 1, 3])
x_end_index = torch.LongTensor([4, 6, 2, 6])

y_start_index = torch.LongTensor([1, 6, 5, 3])
y_end_index = torch.LongTensor([3, 9, 7, 4])

v_total = []
for i in range(bz):
    # slice box i and also compute the sum of its entries
    v_tmp = torch.sum(a[x_start_index[i]: x_end_index[i], y_start_index[i]: y_end_index[i]])
    v_total.append(v_tmp)

v_total = torch.stack(v_total)

for i in range(bz):
    # zero out the boxes
    a[x_start_index[i]: x_end_index[i], y_start_index[i]: y_end_index[i]] = 0

Considering that there are multiple similar unanswered questions, I don’t think there’s a way to do the slicing without loops.
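
That said, the loops above only need the per-box sums and the zeroing, not the variable-sized slices themselves as separate tensors, and those two operations can be written without a Python loop by building boolean box masks with broadcasting. A minimal sketch of that mask-based rewrite (my suggestion, reusing the tensors from the question):

import torch

a = torch.rand(10, 10)
x_start_index = torch.LongTensor([2, 4, 1, 3])
x_end_index = torch.LongTensor([4, 6, 2, 6])
y_start_index = torch.LongTensor([1, 6, 5, 3])
y_end_index = torch.LongTensor([3, 9, 7, 4])

rows = torch.arange(a.size(0))  # (10,)
cols = torch.arange(a.size(1))  # (10,)

# (bz, 10): whether each row/column index falls inside each half-open interval
row_mask = (rows >= x_start_index[:, None]) & (rows < x_end_index[:, None])
col_mask = (cols >= y_start_index[:, None]) & (cols < y_end_index[:, None])

# (bz, 10, 10): boolean mask of each box
box_mask = row_mask[:, :, None] & col_mask[:, None, :]

v_total = (a * box_mask).sum(dim=(1, 2))  # per-box sums, shape (bz,)
a[box_mask.any(dim=0)] = 0                # zero out the union of all boxes

This trades the Python loops for a (bz, 10, 10) mask tensor, so memory scales with the batch size times the full tensor size; it pays off when bz is large relative to the tensor.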

I suddenly realized that such an operation cannot be implemented in an easy way.
The similar questions you listed can be regarded as simplified versions of mine: they only need to slice fixed-size regions given the top-left coordinates, so all the sliced tensors share the same shape. In my case the boxes have different sizes, so that does not hold. I suspect the PyTorch team would rather support the fixed-size case than mine in the future.
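
For that fixed-size case, the gather can indeed be done without a Python loop, e.g. with unfold. A minimal sketch, assuming a hypothetical common box size k and top-left coordinates x0, y0 (names of my choosing, not from the thread):

import torch

a = torch.rand(10, 10)
k = 3  # hypothetical fixed box size
x0 = torch.LongTensor([2, 4, 1, 3])  # top-left row of each box
y0 = torch.LongTensor([1, 6, 5, 3])  # top-left column of each box

# all k x k patches of a: shape (10 - k + 1, 10 - k + 1, k, k)
patches = a.unfold(0, k, 1).unfold(1, k, 1)
boxes = patches[x0, y0]  # (bz, k, k): one advanced-indexing gather, no loop

unfold itself only creates a view of a; the actual copy happens in the final gather.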

This batch slicing operation alone takes more than 50% of my model's runtime. Have you found a solution?