Hi, guys,

I met a tensor shape issue when doing indexing.

Suppose I have a tensor: `a = torch.randn(2, 3, 4)`

`a[[0,]]`'s shape is `(1, 3, 4)`,

`a[:, [0,]]`'s shape is `(2, 1, 4)`,

then why is `a[[0,], [0,]]`'s shape `(1, 4)` instead of `(1, 1, 4)`?

_{Edit: misread, ignore this.}

`a[0, 0]` is just short for `a[0][0]`. The "`:`" you've got there is the slice operator, which gives you every index instead of just one. It's not entirely unlike string indexing: imagine that instead of getting one letter, you get a substring.
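A quick sketch of that difference (variable names here are just for illustration):

```python
import torch

a = torch.randn(2, 3, 4)

# An integer picks a single index and removes that dimension entirely...
print(a[0].shape)   # torch.Size([3, 4])

# ...while ":" is a full slice: it keeps every index along the dimension.
print(a[:].shape)   # torch.Size([2, 3, 4])

# Same idea as string slicing: one letter vs. a substring.
s = "tensor"
print(s[0])    # 't'
print(s[0:3])  # 'ten'
```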

Hi Dio!

Two pieces of behavior combine to produce the results you are seeing.

First, the trailing dimensions of `a` that you are not indexing are simply carried along unchanged. Another way of saying this is that the trailing dimensions are implicitly sliced with full slices.

Second, the dimensions that you index with advanced indexing get replaced with the shape of the index tensors (or, in your example, index lists). The index tensors used for advanced indexing must have matching (or broadcastable) shapes.

So for `a[[0], [0]]` you keep the implicitly-full-sliced third dimension of size `4`, and because your two advanced-indexing lists have the matching shape `[1]`, the first two dimensions are replaced with that shape of `[1]`.

Hence `a[[0], [0]].shape` becomes `[1, 4]`.

Consider:

```
>>> import torch
>>> torch.__version__
'2.3.1'
>>> a = torch.ones(2, 3, 4)
>>> # a[<index>, <implicit slice>, <implicit slice>]
>>> a[[0]].shape
torch.Size([1, 3, 4])
>>> a[[0], :, :].shape
torch.Size([1, 3, 4])
>>> # a[<explicit slice>, <index>, <implicit slice>]
>>> a[:, [0]].shape
torch.Size([2, 1, 4])
>>> a[:, [0], :].shape
torch.Size([2, 1, 4])
>>> # a[<index>, <index>, <implicit slice>]
>>> a[[0], [0]].shape
torch.Size([1, 4])
>>> a[[0], [0], :].shape
torch.Size([1, 4])
>>> # shape of advanced indices determines shape of result
>>> a[[0, 1, 0], [0, 1, 2]].shape
torch.Size([3, 4])
>>> a[[[0, 1, 0], [1, 0, 1], [0, 0, 1], [0, 1, 1]], [[0, 1, 2], [1, 2, 0], [2, 0, 1], [2, 1, 0]]].shape
torch.Size([4, 3, 4])
```
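The index shapes only need to be broadcastable, not identical. A small sketch of that case:

```python
import torch

a = torch.ones(2, 3, 4)

# Index tensors of shapes (2, 1) and (1, 3) broadcast to (2, 3),
# so the two indexed dimensions are replaced by a (2, 3) shape,
# with the trailing dimension of size 4 carried along unchanged.
rows = torch.tensor([[0], [1]])   # shape (2, 1)
cols = torch.tensor([[0, 1, 2]])  # shape (1, 3)
print(a[rows, cols].shape)        # torch.Size([2, 3, 4])
```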

Best.

K. Frank


I'm using `a[[0], [0]]` rather than `a[0, 0]`.

Thanks, that solved my question.

But note that using `a[0, 0]` is still gonna give you a shape of `(4,)`, since plain integer indices remove both indexed dimensions entirely; if you want to get a shape of `(1, 1, 4)`, then use `a[:1, [n]]`.
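For concreteness, here is a quick check of those shapes (using `0` as an illustrative value of `n`):

```python
import torch

a = torch.randn(2, 3, 4)

# Plain integer indices remove both indexed dimensions entirely.
print(a[0, 0].shape)     # torch.Size([4])

# A length-1 slice and a one-element index list each keep a size-1 dim.
print(a[:1, [0]].shape)  # torch.Size([1, 1, 4])

# List indices in both positions give (1, 4), as discussed above.
print(a[[0], [0]].shape) # torch.Size([1, 4])
```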