Tensor slice to zero index at high dim in PyTorch 0.4.0

import torch

a = torch.Tensor(10, 10)  # uninitialized 10x10 tensor
print(a[:0])
print(a[:, :0])  # should be a tensor with no data
print(a[:, :1])

The outputs are:

tensor([])
tensor(1.00000e-27 *
       [ 0.0000,  0.0000,  5.6456,  0.0252])
tensor(1.00000e-27 *
       [[ 0.0000],
        [ 0.0000],
        [ 5.6456],
        [ 0.0252]])

The inconsistency with NumPy is that a[:, :0] is not tensor([]) in PyTorch; instead it is a 1-dim tensor (dimensionality reduced by 1) holding the data at index zero of the sliced dimension.

The first dim works as expected, however.
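For reference, this is what NumPy does with the same slices (a sketch, with np.empty standing in for torch.Tensor — both allocate uninitialized memory):

```python
import numpy as np

# NumPy reference behavior for the same slices as the PyTorch snippet above.
a = np.empty((10, 10))

print(a[:0].shape)     # empty slice along the first dim: shape (0, 10)
print(a[:, :0].shape)  # empty slice along the second dim: shape (10, 0), no data
print(a[:, :1].shape)  # shape (10, 1)
```

Note that NumPy keeps the number of dimensions in all three cases; only the sliced axis shrinks to zero, which is what a[:, :0] should also do in PyTorch.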

@wandering007 there’s a pull request open to patch this: https://github.com/pytorch/pytorch/pull/7775 . A full fix won’t land until PyTorch supports arbitrary zero-size dimensions in tensors (this should happen sometime in the next few months).
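To illustrate what "arbitrary zero-size dimensions" means in practice, here is a sketch of the semantics using NumPy, which already supports them throughout (the specific operations shown are my choice of examples, not taken from the PR):

```python
import numpy as np

# Shapes like (10, 0) or (0, 3) are ordinary shapes in NumPy, and
# operations on them follow the usual rules.
empty_cols = np.zeros((10, 0))
empty_rows = np.zeros((0, 3))

# Matrix product (10, 0) @ (0, 3) yields a (10, 3) array of zeros:
# a sum over an empty axis.
prod = empty_cols @ empty_rows
print(prod.shape)          # (10, 3)

# Reductions over an empty array return the identity element.
print(empty_cols.sum())    # 0.0

# Concatenating an empty slice is a no-op.
a = np.arange(12).reshape(3, 4)
same = np.concatenate([a[:, :0], a], axis=1)
print(same.shape)          # (3, 4)
```

Once PyTorch gains the same support, a[:, :0] should simply be a tensor of shape (10, 0) that composes with other ops like any other tensor.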