Multidimensional filling in an N-dimensional tensor

Hi,

Is there a way to use a vectorized operation to fill a portion of a tensor, given different indices for each sample in a batch B? For example:

import torch
x = torch.randn(3, 3, 224, 224)
x_shape = x.shape
B, C, H, W = x_shape
cut_width = int(W * 0.5)
cut_height = int(H * 0.5)

# pick a random box center for each sample in the batch
cut_x = torch.randint(0, W, (B,))
cut_y = torch.randint(0, H, (B,))

# clamp each roughly cut_height x cut_width box to the image bounds
bbx1 = torch.clamp(cut_x - cut_width // 2, 0, W)
bby1 = torch.clamp(cut_y - cut_height // 2, 0, H)
bbx2 = torch.clamp(cut_x + cut_width // 2, 0, W)
bby2 = torch.clamp(cut_y + cut_height // 2, 0, H)

mask = torch.zeros(x_shape)

# Replace the for loop with a vectorized implementation
for i in range(B):
    mask[i, :, bby1[i]:bby2[i], bbx1[i]:bbx2[i]] = 1.0

I did come across torchvision.transforms.functional.erase, but it does not work on batched inputs.
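
For reference, erase takes plain integer box coordinates, which is why it cannot vectorize over a batch of different boxes; here is a minimal per-sample sketch that builds the same mask (assuming v=1.0 as the fill value):

import torchvision.transforms.functional as TF

# one erase() call per sample, since i (row), j (col), h, w must be plain ints
mask = torch.zeros(x_shape)
for i in range(B):
    mask[i] = TF.erase(mask[i], int(bby1[i]), int(bbx1[i]),
                       int(bby2[i] - bby1[i]), int(bbx2[i] - bbx1[i]),
                       v=torch.tensor(1.0))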

Hi Martin!

In a reply to a similar question, I showed how to create a “zero-mask” from
a batch of bounding boxes. You should be able to do the same, but convert
the “zero-mask” to a “one-mask” with logical_not().
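
A minimal sketch of that idea, assuming the linked post builds the mask from broadcasted arange comparisons (box tensors as in your snippet):

rows = torch.arange(H)  # (H,)
cols = torch.arange(W)  # (W,)

# True outside the box along each axis, per sample
out_y = (rows < bby1[:, None]) | (rows >= bby2[:, None])  # (B, H)
out_x = (cols < bbx1[:, None]) | (cols >= bbx2[:, None])  # (B, W)

# "zero-mask": True everywhere outside the box ...
zero_mask = out_y[:, :, None] | out_x[:, None, :]         # (B, H, W)
# ... and logical_not() flips it into a "one-mask"
one_mask = zero_mask.logical_not()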

Here is the relevant post:

Best.

K. Frank


Thank you very much for your post! If I “hack” it like this, it works:

y_mask = torch.logical_or(torch.arange(H).unsqueeze(0) < bby1[:, None],
                          torch.arange(H).unsqueeze(0) >= bby2[:, None])
x_mask = torch.logical_or(torch.arange(W).unsqueeze(0) < bbx1[:, None],
                          torch.arange(W).unsqueeze(0) >= bbx2[:, None])
ones_mask = ~torch.logical_or(y_mask.unsqueeze(2), x_mask.unsqueeze(1)).unsqueeze(1)
mask = torch.zeros(x_shape)
mask += ones_mask
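
Here, y_mask and x_mask are True outside the box along each axis, so the negated logical_or is True exactly inside the box, and the final unsqueeze(1) gives the mask shape (B, 1, H, W) so it broadcasts over the channel dimension.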

Is there potentially a more elegant solution? @ptrblck Thank you again!

I think your approach sticks to @KFrank’s suggestion and looks alright.
You could remove the last step and directly convert ones_mask to a FloatTensor, since that seems to be your target:

ones_mask = ~torch.logical_or(y_mask.unsqueeze(2), x_mask.unsqueeze(1)).unsqueeze(1)
mask = ones_mask.float()
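
As a quick sanity check, the result matches the loop-based mask from the original question once it is expanded over the channel dimension:

mask_loop = torch.zeros(x_shape)
for i in range(B):
    mask_loop[i, :, bby1[i]:bby2[i], bbx1[i]:bbx2[i]] = 1.0

assert torch.equal(mask_loop, ones_mask.float().expand(B, C, H, W))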