KFrank
(K. Frank)
January 25, 2023, 6:36pm
Hi Martin!
martinferianc:
Is there a way to use a vectorized operation to fill a portion of the tensor, given different indices for each sample in a batch B?
# Replace the for loop with a vectorized implementation
for i in range(B):
    mask[i, :, bby1[i]:bby2[i], bbx1[i]:bbx2[i]] = 1.0
In a reply to a similar question, I showed how to create a “zero-mask” from
a batch of bounding boxes. You should be able to do the same, but convert
the “zero-mask” to a “one-mask” with logical_not().
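As a sketch of the idea, one way to build such a per-sample mask without the python loop is to compare row and column index grids against the box bounds with broadcasting. (The box tensors `bby1`, `bby2`, `bbx1`, `bbx2` below are made-up example values, not from the original question.)

```python
import torch

B, C, H, W = 4, 3, 8, 8

# hypothetical per-sample box bounds, each of shape (B,)
bby1 = torch.tensor([1, 0, 2, 3])
bby2 = torch.tensor([4, 5, 6, 8])
bbx1 = torch.tensor([0, 2, 1, 4])
bbx2 = torch.tensor([3, 7, 5, 8])

# index grids that broadcast against the per-sample bounds
rows = torch.arange(H).view(1, H, 1)   # (1, H, 1)
cols = torch.arange(W).view(1, 1, W)   # (1, 1, W)

# (B, H, 1) and (B, 1, W) boolean tests combine into a (B, H, W) box mask
in_rows = (rows >= bby1.view(B, 1, 1)) & (rows < bby2.view(B, 1, 1))
in_cols = (cols >= bbx1.view(B, 1, 1)) & (cols < bbx2.view(B, 1, 1))
mask = (in_rows & in_cols).unsqueeze(1).expand(B, C, H, W).float()

# loop version, for comparison
mask_loop = torch.zeros(B, C, H, W)
for i in range(B):
    mask_loop[i, :, bby1[i]:bby2[i], bbx1[i]:bbx2[i]] = 1.0

print(torch.equal(mask, mask_loop))   # prints True
```

The `expand()` over the channel dimension costs no extra memory, since it only creates a broadcast view.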
Here is the relevant post:
Hi Hwang!
Yes, you can use your bounding boxes, B, to build a mask tensor that
you then multiply onto img to zero out the values in the bounding boxes:
>>> import torch
>>> print (torch.__version__)
1.13.0
>>>
>>> _ = torch.manual_seed (2022)
>>>
>>> nBatch = 256
>>> nChannels = 3
>>> h = 32
>>> w = h
>>>
>>> img = torch.randn (nBatch, nChannels, h, w)
>>> B = torch.rand (nBatch, 4) * h
>>> B = B.type (torch.int32)
>>>
>>> imgB = img.clone()
>>>
>>> # loop version
>>> for i in range (nBatch…
Best.
K. Frank