# Why does `stack` arrange elements the way it does?

Given a 4-dimensional tensor X

`X = torch.tensor([[[[0, 2]], [[1, 3]]]])`

of shape

`torch.Size([1, 2, 1, 2])`

and `torch.stack`, which stacks the given tensors along a new dimension (here the last one),

`Y = torch.stack((X[:, :1], X[:, 1:]), dim=-1)`

results in the shape

`Y.shape  # torch.Size([1, 1, 1, 2, 2])`

but why are the elements arranged in this way?

```
Y
tensor([[[[[0, 1],
           [2, 3]]]]])
```

I am looking for an explanation of why this specific order results, but can't seem to find one.
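For reference, the arrangement can be checked directly: `torch.stack` guarantees that indexing the result along the new dimension recovers each stacked input unchanged (a minimal sketch using the tensors from the question):

```python
import torch

X = torch.tensor([[[[0, 2]], [[1, 3]]]])       # shape [1, 2, 1, 2]
Y = torch.stack((X[:, :1], X[:, 1:]), dim=-1)  # shape [1, 1, 1, 2, 2]

# Indexing Y along the new last dimension recovers the stacked inputs:
assert torch.equal(Y[..., 0], X[:, :1])
assert torch.equal(Y[..., 1], X[:, 1:])
```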

Hi,

I think you should remove all the dimensions of size 1 to make this clearer.
But you basically take each column of the tensor and stack them along the last dimension, making them rows.
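The squeezed view described above can be sketched like this: after removing the size-1 dimensions, X is a 2x2 matrix, and stacking its rows along a new last dimension produces its transpose, so the original columns become the rows of the result.

```python
import torch

# Squeeze away the size-1 dims: X becomes a 2x2 matrix.
X = torch.tensor([[[[0, 2]], [[1, 3]]]])  # shape [1, 2, 1, 2]
M = X.squeeze()                           # shape [2, 2]
# M = [[0, 2],
#      [1, 3]]

# The two stacked slices X[:, :1] and X[:, 1:] correspond to the rows of M.
a, b = M[0], M[1]   # tensor([0, 2]), tensor([1, 3])

# Stacking along a new last dimension places them as columns,
# i.e. the result is M transposed:
Y = torch.stack((a, b), dim=-1)
# Y = [[0, 1],
#      [2, 3]]
```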

Thank you, yes, that is what the output shows, but I am having trouble understanding why they become rows. If I view the initial layout as having two channels, why would elements at identical positions across the channels end up in the same row?

This is because, by convention, the concepts of row and column correspond to the last two dimensions of a tensor.
When you append a new last dimension, what used to be the column index becomes the row index, and whatever you stack along that new dimension forms the columns.
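This convention is easiest to see with 1-D inputs (a minimal sketch): stacking along a new last dimension makes the inputs columns, while stacking along a new first dimension keeps them as rows.

```python
import torch

a = torch.tensor([0, 2])
b = torch.tensor([1, 3])

# New LAST dimension: each input contributes one element per row,
# so the inputs end up as columns.
cols = torch.stack((a, b), dim=-1)
# cols = [[0, 1],
#         [2, 3]]

# New FIRST dimension: each input stays intact as a row.
rows = torch.stack((a, b), dim=0)
# rows = [[0, 2],
#         [1, 3]]
```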


Thank you. So the logic is basically to always work your way up from the right-hand side: the last two dimensions are interpreted as rows and columns.
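That right-to-left reading can be spelled out on the original tensors: in Y, the last dimension indexes which input a value came from, and the second-to-last dimension is the original last dimension of X.

```python
import torch

X = torch.tensor([[[[0, 2]], [[1, 3]]]])       # shape [1, 2, 1, 2]
Y = torch.stack((X[:, :1], X[:, 1:]), dim=-1)  # shape [1, 1, 1, 2, 2]

# Reading from the right: Y[..., i, k] == input_k[..., i],
# i.e. dim -1 selects the stacked input (the column),
# and dim -2 carries the original last dim of X (the row).
for k, part in enumerate((X[:, :1], X[:, 1:])):
    for i in range(2):
        assert torch.equal(Y[..., i, k], part[..., i])
```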