Does PyTorch follow NumPy's indexing?

Example 1

I was playing around with indexing a 3D tensor with a boolean mask and noticed that x[mask] != x[mask.nonzero()] (but see the Note below), in contrast with NumPy, where the two operations are equivalent: x[mask] == x[mask.nonzero()].

import numpy as np
import torch

torch.manual_seed(42)

tensor = torch.arange(36).reshape(3, 4, 3)
mask = torch.randint(2, (3, 4), dtype=torch.bool)

arr = tensor.clone().numpy()
mask_arr = mask.clone().numpy()

print('mask and mask.nonzero are equivalent in torch:',
      torch.equal(tensor[mask], tensor[mask.nonzero()]))
print('mask and mask.nonzero are equivalent in numpy:',
      np.array_equal(arr[mask_arr], arr[mask_arr.nonzero()]))

print('tensor[mask]:\n', tensor[mask])
print('arr[mask_arr]:\n', arr[mask_arr])

and the output is:

mask and mask.nonzero are equivalent in torch: False
mask and mask.nonzero are equivalent in numpy: True
tensor[mask]:
 tensor([[ 3,  4,  5],
        [15, 16, 17],
        [27, 28, 29]])
arr[mask_arr]:
 [[ 3  4  5]
 [15 16 17]
 [27 28 29]]
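
As far as I can tell, the difference comes down to shapes: mask.nonzero() is a single (N, 2) integer tensor, so PyTorch applies it to the first dimension only, whereas the boolean mask selects along the first two dimensions at once. A quick shape check with the tensors above (note that with this seed the column indices in mask.nonzero() all happen to be valid row indices, so no IndexError is raised):

print(tensor[mask].shape)            # torch.Size([3, 3])
print(mask.nonzero().shape)          # torch.Size([3, 2])
print(tensor[mask.nonzero()].shape)  # torch.Size([3, 2, 4, 3])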

Note

According to the PyTorch docs:

torch.nonzero(..., as_tuple=True) returns a tuple of 1-D index tensors, allowing for advanced indexing …

If we use x[mask.nonzero(as_tuple=True)], the output is equivalent to x[mask], so I assume that is what happens under the hood when boolean tensors are used for advanced indexing. Is that correct?
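
For reference, a quick check with the tensor and mask from Example 1 (nothing new, just confirming the as_tuple=True form):

rows, cols = mask.nonzero(as_tuple=True)  # tuple of 1-D index tensors
print(torch.equal(tensor[mask], tensor[rows, cols]))                   # True
print(torch.equal(tensor[mask], tensor[mask.nonzero(as_tuple=True)]))  # True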

Example 2

Another example where PyTorch and NumPy differ is the following:

import torch
import numpy as np

a = torch.arange(9).reshape(3, 3)
b = np.arange(9).reshape(3, 3)

print('x[[[0]]] in torch:\n', a[[[0]]])
print('x[[[0]]] in numpy:\n', b[[[0]]])

and, as the output shows, the resulting shapes differ (PyTorch gives shape (1, 3), NumPy gives (1, 1, 3)):

x[[[0]]] in torch:
 tensor([[0, 1, 2]])
x[[[0]]] in numpy:
 [[[0 1 2]]]
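
If I instead pass an explicit index tensor/array rather than a nested Python list, the shapes appear to match (a quick check, assuming the same a and b as above):

print(a[torch.tensor([[0]])].shape)  # torch.Size([1, 1, 3])
print(b[np.array([[0]])].shape)      # (1, 1, 3)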

Example 3

This is related to: Indexing with list of booleans differs from numpy · Issue #6773 · pytorch/pytorch · GitHub

import numpy as np
import torch

np.random.seed(43)

a = np.arange(27).reshape(3, 3, 3)
b = np.random.randint(2, size=(3, 3)).astype('bool')
print(a[b.tolist()])

c = torch.tensor(a)
d = torch.tensor(b)
print(c[d.tolist()])

and the output is:

[[ 6  7  8]
 [ 9 10 11]
 [12 13 14]
 [21 22 23]
 [24 25 26]]
tensor([19, 23])
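
Here too, passing the boolean tensor itself rather than a Python list of bools seems to line up with NumPy (a quick check with the same a, b, c, d as above):

print(np.array_equal(c[d].numpy(), a[b]))  # True -- boolean-mask semantics in both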

Are there any other known cases where PyTorch and NumPy differ with regard to indexing?