I’m trying to understand `tensor.stride()`.

Let’s compare three tensors with the same underlying data: `[0, 1, 2]`.

Making a tensor `T` of shape `(3,1)` (i.e. a ‘column vector’ x a singleton dimension) from that data gives `stride=(1,3)`. This makes sense because moving from component `T[i,j]` to `T[i+1,j]` corresponds to ‘going 1 forward’ in the underlying data `[0, 1, 2]`. Similarly, moving from `T[i,j]` to `T[i,j+1]` means ‘going 3 forward’, wrapping around to the start once we reach the end of the underlying data.
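To make this mental model concrete, here is the address arithmetic I’m assuming (a plain-Python sketch; `flat_index` is just my own helper name, not a PyTorch function): the flat position of element `T[i,j]` in the underlying data is the dot product of the index with the stride.

```python
def flat_index(index, stride):
    """Flat offset into the underlying data for a multi-index:
    offset = sum of (index component * corresponding stride)."""
    return sum(i * s for i, s in zip(index, stride))

data = [0, 1, 2]

# Shape (3,1) with stride (1,3): walking down the column
# steps 1 forward in the data, exactly as described above.
assert [data[flat_index((i, 0), (1, 3))] for i in range(3)] == [0, 1, 2]
```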

For a tensor of shape `(1,3)` (‘row vector’ x singleton dimension), we get `stride=(3,1)`. This also makes sense because now moving from `T[i,j]` to `T[i+1,j]` means ‘going 3 forward’ in the underlying data, and analogously ‘going 1 forward’ for the second dimension.

However, I don’t understand the case of shape `(3,1,1)` (‘column vector’ x singleton dimension x singleton dimension). By the above reasoning, the stride should be `stride=(1,3,3)`, because moving from `T[i,j,k]` to `T[i,j+1,k]` would require ‘going 3 forward’ in the data `[0, 1, 2]`. However, the actual stride is `(1,1,1)`. How does this make sense?
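For reference, here is the stride rule as I currently understand it, as a plain-Python sketch (my own `contiguous_strides` helper, not PyTorch’s actual implementation): the last dimension has stride 1, and each earlier dimension’s stride is the product of all sizes to its right.

```python
def contiguous_strides(shape):
    # Right-to-left running product of sizes: the last dimension
    # has stride 1, and each earlier dimension strides over
    # everything to its right.
    strides = []
    running = 1
    for size in reversed(shape):
        strides.append(running)
        running *= size
    return tuple(reversed(strides))

assert contiguous_strides((1, 3)) == (3, 1)
assert contiguous_strides((3, 1, 1)) == (1, 1, 1)
```

This rule reproduces the `stride=(3,1)` for shape `(1,3)` and the `(1,1,1)` I observe for shape `(3,1,1)`, which is part of what confuses me.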