It’s a bit tricky to explain in words: view does not swap two dimensions; it rearranges the values so that they fit the new shape. Maybe the easiest way to see the difference is to contrast it with transpose. I.e.,
a = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
a.view(3, 2)
# tensor([[1, 2],
#         [3, 4],
#         [5, 6]])
but
a.transpose(0, 1)
# tensor([[1, 4],
#         [2, 5],
#         [3, 6]])
Or maybe think of it as creating a new empty tensor with the dimensions specified via the view arguments, and then filling it in order with the original values read in row-major order (left to right, row by row).
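That filling-in analogy can be sketched in plain Python, without torch. Note this is only an illustration of the semantics, not how PyTorch implements view (which reinterprets strides over the same underlying storage); `view_like` and `transpose_like` are hypothetical helper names, not PyTorch functions.

```python
def view_like(matrix, rows, cols):
    # Read all values in row-major order, exactly as view traverses them...
    flat = [x for row in matrix for x in row]
    assert len(flat) == rows * cols, "new shape must hold the same number of elements"
    # ...then refill the new shape in the same order.
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

def transpose_like(matrix):
    # transpose instead swaps the axes: element (i, j) moves to (j, i).
    return [list(col) for col in zip(*matrix)]

a = [[1, 2, 3],
     [4, 5, 6]]
print(view_like(a, 3, 2))   # [[1, 2], [3, 4], [5, 6]]
print(transpose_like(a))    # [[1, 4], [2, 5], [3, 6]]
```

The printed results match the two tensor outputs above: view keeps the reading order of the values, while transpose reorders them.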