Tensor.view is misleading

After reading the documentation for Tensor.view,
https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view
it seems like it could be used to swap two dimensions of a tensor, but this is not the case:

import torch

a = torch.randn((1, 2, 3, 4))

# Swap 2nd and 3rd dimensions using split and stack
b = torch.stack(a.split(1, 1), 3).squeeze(1)  # has shape (1, 3, 2, 4)

# Swap 2nd and 3rd dimensions using tensor.view
c = a.view((1, 3, 2, 4))  # has shape (1, 3, 2, 4)

(b == c).all()  # tensor(False)

Am I misunderstanding the documentation? What exactly does tensor.view() do in the example above?

It’s a bit tricky to explain in words: view does not directly swap two dimensions, it rearranges the values so that they fit the new shape. Maybe the easiest way to explain it is by contrast with transpose, i.e.,

a = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])

a.view(3, 2)

# tensor([[ 1,  2],
#         [ 3,  4],
#         [ 5,  6]])

but

a.transpose(0, 1)

# tensor([[ 1,  4],
#         [ 2,  5],
#         [ 3,  6]])

Or think of it as creating a new empty tensor with the shape given by the view arguments and then filling it, in order, with the original values.
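That mental model can be checked directly; a small sketch, reusing the tensors a, b, and c from the question above:

```python
import torch

a = torch.randn((1, 2, 3, 4))
b = torch.stack(a.split(1, 1), 3).squeeze(1)  # the swap from the question
c = a.view((1, 3, 2, 4))

# view keeps the flattened element order and only regroups it:
assert torch.equal(c.flatten(), a.flatten())

# the actual dimension swap is transpose (or permute):
assert torch.equal(a.transpose(1, 2), b)
```

So c is the same sequence of values read in a different grouping, while b genuinely reorders the elements.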

So reshape from numpy is the same as view from torch?

So reshape from numpy is the same as view from torch?

Yes, mostly. view is maybe even a bit clearer as a name, because it’s just changing the view onto the array, not how it is laid out in memory. One caveat: Tensor.view never copies and raises an error when the tensor is not contiguous, while numpy’s reshape (and torch.reshape) fall back to copying in that case.
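A small sketch of the correspondence, plus the one difference I’d flag: view refuses to copy, while reshape (in both NumPy and PyTorch) copies when it has to:

```python
import numpy as np
import torch

x = torch.arange(6).view(2, 3)
y = np.arange(6).reshape(2, 3)
assert (x.numpy() == y).all()  # same row-major fill order

t = x.transpose(0, 1)   # non-contiguous: a view with swapped strides
assert not t.is_contiguous()
try:
    t.view(6)           # view cannot express this without copying
except RuntimeError:
    pass
r = t.reshape(6)        # reshape falls back to a copy and succeeds
assert torch.equal(r, torch.tensor([0, 3, 1, 4, 2, 5]))
```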
