An efficient way to slice tensors with array indices?

How can I efficiently extract an unaligned (per-sample) slice from a large tensor?
The pseudo-code is shown below (it does not work in PyTorch as written):

import torch

x = torch.rand((3, 1, 6, 6), requires_grad=True)  # [batch_size, channel, w, h]
left_index = torch.randint(0, 4, (3,))
right_index = left_index + 2
bottom_index = torch.randint(0, 4, (3,))
top_index = bottom_index + 2
new_x = x[:, :, left_index:right_index, bottom_index:top_index]  # invalid: slices cannot take index tensors

That is, for the i-th image I want the small tensor given by the slice
(i, :, left_index[i]:right_index[i], bottom_index[i]:top_index[i])
Of course, I can obtain the desired tensor by looping over all the images and stacking the per-image results (a sketch of this loop is at the end of this post).
However, this “for-loop + stacking” approach seems slow, and I am looking for a more efficient, vectorized method. Please give me some advice. Thank you!
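
For reference, here is a minimal sketch of the for-loop + stacking version I mean (every crop is 2x2 in this setup, so the per-image results can be stacked):

import torch

x = torch.rand((3, 1, 6, 6), requires_grad=True)  # [batch_size, channel, w, h]
left_index = torch.randint(0, 4, (3,))
right_index = left_index + 2
bottom_index = torch.randint(0, 4, (3,))
top_index = bottom_index + 2

crops = []
for i in range(x.size(0)):
    l, r = left_index[i].item(), right_index[i].item()
    b, t = bottom_index[i].item(), top_index[i].item()
    crops.append(x[i, :, l:r, b:t])   # per-image crop, shape [1, 2, 2]
new_x = torch.stack(crops, dim=0)     # shape [3, 1, 2, 2], still differentiable w.r.t. x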

This post from @KFrank might be useful. :slight_smile:

Thank you, @ptrblck. I read that post. In my view, the solution there (the inclusion-exclusion version) works because only the sum over the masked region is needed. However, in my question I want the raw values inside the region, so the inclusion-exclusion version does not seem to fit this case. I am not entirely sure, though… Could you give me some tips on applying the inclusion-exclusion method here?
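
For context, my (possibly wrong) understanding of the inclusion-exclusion idea is that it recovers region sums from a 2D cumulative sum (a summed-area table), roughly like this sketch, which only yields the sum of a region, not its raw values:

import torch
import torch.nn.functional as F

x = torch.rand(6, 6)
# summed-area table padded with a zero row/column, so S[i, j] == x[:i, :j].sum()
S = F.pad(torch.cumsum(torch.cumsum(x, dim=0), dim=1), (1, 0, 1, 0))
l, r, b, t = 1, 3, 2, 4
region_sum = S[r, t] - S[l, t] - S[r, b] + S[l, b]   # equals x[l:r, b:t].sum()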

Ah OK, sorry for the misunderstanding.
I’m not sure if it would be possible to get the desired regions without a reduction operation; I think you would have to use a loop.
Since the slicing might return differently shaped outputs, you wouldn’t be able to create a single output tensor (nested tensors would be needed, which are still a work in progress).
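
For example (just a rough sketch with made-up per-sample window sizes to illustrate the different shapes):

import torch

x = torch.rand(3, 1, 6, 6)
left = torch.tensor([0, 1, 2])
right = torch.tensor([2, 4, 6])   # window widths 2, 3, 4 -> different output shapes
crops = []
for i in range(x.size(0)):
    l, r = int(left[i]), int(right[i])
    crops.append(x[i, :, l:r, l:r])   # shapes: [1, 2, 2], [1, 3, 3], [1, 4, 4]
# torch.stack(crops) would fail here, so a Python list (or nested tensors, eventually) is needed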

Thank you for your help~

Hi, is there any update on this?