I think you should remove all the dimensions of size 1 to make this clearer.
Basically, you take each column of the Tensor and stack them along the last dimension, so they become rows.
Thank you, yes, that is what the output shows, but I am having trouble understanding why they become rows. If I view the initial layout as having two channels, why would it be restructured so that elements at identical positions across channels end up in the same row?
This is because, by convention, rows and columns correspond to the last two dimensions of a Tensor.
When you add a new last dimension, what used to be the last dimension (the columns) becomes the second-to-last, i.e. the rows, and whatever you stack along the new dimension forms the columns.
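A small sketch of that dimension bookkeeping, using NumPy's `np.stack` (which behaves the same as `torch.stack` for this purpose); the two example arrays stand in for the two channels:

```python
import numpy as np

# Two "channels", each 2x3: rows are dim -2, columns are dim -1.
a = np.array([[1, 2, 3],
              [4, 5, 6]])
b = np.array([[10, 20, 30],
              [40, 50, 60]])

# Stacking along a new last dimension gives shape (2, 3, 2):
# the old columns (length 3) are now the rows of each 3x2 slice,
# and the two channels form the new columns.
s = np.stack([a, b], axis=-1)
print(s.shape)  # (2, 3, 2)
print(s[0])
# [[ 1 10]
#  [ 2 20]
#  [ 3 30]]
```

So elements at the same position across the two channels (e.g. `a[0, 0]` and `b[0, 0]`) land in the same row of a slice, which is exactly the restructuring you are seeing in the output.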