# Indexing a tensor with another tensor vs an array

What is the exact reason for treating tensors and (NumPy) arrays differently when indexing a tensor? For example:

```python
import torch

indices = torch.tensor([[0, 1], [0, 2]])

t = torch.arange(1, 28).reshape(3, 3, 3)
# tensor([[[ 1,  2,  3],
#          [ 4,  5,  6],
#          [ 7,  8,  9]],
#
#         [[10, 11, 12],
#          [13, 14, 15],
#          [16, 17, 18]],
#
#         [[19, 20, 21],
#          [22, 23, 24],
#          [25, 26, 27]]])
```

```python
>>> t[indices.numpy()]
tensor([[ 1,  2,  3],
        [16, 17, 18]])

>>> t[indices]
tensor([[[[ 1,  2,  3],
          [ 4,  5,  6],
          [ 7,  8,  9]],

         [[10, 11, 12],
          [13, 14, 15],
          [16, 17, 18]]],


        [[[ 1,  2,  3],
          [ 4,  5,  6],
          [ 7,  8,  9]],

         [[19, 20, 21],
          [22, 23, 24],
          [25, 26, 27]]]])
```

I understand how each result is produced, but is there a particular reason for this difference? Isn't this distinction a little confusing, and can't it easily lead to errors?
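For what it's worth, the numpy result above looks as if the 2-D array were unpacked into a tuple of per-dimension indices, while the tensor performs ordinary advanced indexing along the first dimension. Here is a small sketch of that equivalence (this is my reading of the observed behaviour, not an official statement of PyTorch's indexing rules):

```python
import torch

t = torch.arange(1, 28).reshape(3, 3, 3)
indices = torch.tensor([[0, 1], [0, 2]])

# Tensor index: advanced indexing along dim 0.
# Result shape is indices.shape + t.shape[1:] == (2, 2, 3, 3),
# i.e. result[i, j] == t[indices[i, j]].
adv = t[indices]
assert adv.shape == (2, 2, 3, 3)
assert torch.equal(adv[1, 1], t[2])

# The numpy-array result shown above matches explicit tuple indexing,
# where the two rows act as per-dimension index lists:
# t[(0, 0)] -> [1, 2, 3] and t[(1, 2)] -> [16, 17, 18].
tup = t[[0, 1], [0, 2]]
assert torch.equal(tup, torch.tensor([[1, 2, 3], [16, 17, 18]]))
```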

Moreover, indexing with a bool numpy array doesn't work at all:

```python
>>> b = t < 15
>>> b
tensor([[[ True,  True,  True],
         [ True,  True,  True],
         [ True,  True,  True]],

        [[ True,  True,  True],
         [ True,  True, False],
         [False, False, False]],

        [[False, False, False],
         [False, False, False],
         [False, False, False]]])
```

```python
>>> t[b.numpy()]
IndexError: Dimension out of range (expected to be in range of [-3, 2], but got 3)
```

But it works just fine if we use the bool tensor without converting it to a numpy array:

```python
>>> t[b]
tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14])
```

Why is that happening? The bool tensor b and its numpy counterpart b.numpy() have exactly the same shape and values.
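In case it helps anyone reproducing this: wrapping the numpy mask back into a tensor restores the expected masking behaviour, which suggests (my assumption) that the difference lies in how `__getitem__` dispatches on the index's type, not in the mask's contents:

```python
import torch

t = torch.arange(1, 28).reshape(3, 3, 3)
b = t < 15

# Boolean tensor mask: selects the 14 elements where b is True.
masked = t[b]
assert torch.equal(masked, torch.arange(1, 15))

# Round-tripping the same mask through numpy and back to a tensor
# works again, so the values themselves are not the problem.
b_np = b.numpy()
assert torch.equal(t[torch.from_numpy(b_np)], masked)
```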