How can I mask on arrays instead of values?

Hi all,

I am looking to mask a 2D tensor with a 1D boolean tensor along the first dimension. The masked_* functions such as masked_select and masked_scatter operate on individual values (masked_select, for instance, returns a flattened 1D tensor), while I need to retain the 2D shape, so they will not work here.
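For context, here is a small toy example of how masked_select flattens everything into 1D even when the mask covers whole rows (made-up data, not my actual tensor):

import torch

x = torch.arange(6.).reshape(3, 2)                     # toy (3, 2) tensor
row_mask = torch.tensor([True, False, True])           # keep rows 0 and 2
flat = torch.masked_select(x, row_mask.unsqueeze(1))   # mask is broadcast over the columns
print(flat.shape)                                      # torch.Size([4]) -- the 2D structure is gone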
I have used boolean indexing instead to make it work:

import torch

test = torch.Tensor(parameter_space)              # parameter_space is my (7520, 8) set of parameters
print(test.shape)
mask = torch.ones(len(test), dtype=torch.bool)    # start with all rows selected
mask[1] = False                                   # drop the second row
print(mask)
print(test[mask, :].shape)                        # boolean indexing along dim 0 keeps the 2D shape

This yields the following (correct) output, removing the second row:

torch.Size([7520, 8])
tensor([ True, False,  True,  ...,  True,  True,  True])
torch.Size([7519, 8])

However, I was wondering if this is the correct / most efficient way to do this?

Your approach certainly looks correct. Regarding efficiency: are you seeing a bottleneck in this operation? If so, there might be faster approaches (I'm not sure yet), but I would generally not optimize an operation whose cost won't even be visible in the overall model training time.
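For completeness, one alternative that also keeps the 2D shape is index_select with integer indices derived from the mask; I wouldn't expect a big difference either way, but here is a quick sketch using random data as a stand-in for parameter_space:

import torch

test = torch.randn(7520, 8)                            # stand-in for the real parameter_space
mask = torch.ones(len(test), dtype=torch.bool)
mask[1] = False

kept_bool = test[mask]                                 # boolean indexing (same as test[mask, :])
idx = mask.nonzero(as_tuple=True)[0]                   # integer row indices where mask is True
kept_idx = torch.index_select(test, 0, idx)            # integer indexing along dim 0

print(torch.equal(kept_bool, kept_idx))                # True; both are (7519, 8)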