# Permute/reshape, view difference

Hello, so in my code I have a tensor of size [1, 8, 64, 1024].
Let’s say I want to reshape it back to its original size, that is, [1, 512, 1024].

So I want to “merge” (this is not exactly the word) the 8×64 dimensions into one dimension of 512.
I used `view(1, 512, 1024)` to get from [1, 8, 64, 1024] back to [1, 512, 1024].
But then I was experimenting to understand torch functions, and with
`permute(0, 2, 1, 3)` followed by `reshape(1, 512, 1024)` I got the same result.

The results I get are equal when I check with `torch.eq()`. But which is better to use, in terms of complexity?

Thanks a lot

I’m confused that you get the same results here — in the following code snippet the results are clearly not the same:

```
$ cat temp.py
import torch

a = torch.randn(1, 8, 64, 1024)
b = a.reshape(1, 512, 1024)
c = a.permute(0, 2, 1, 3).reshape(1, 512, 1024)
a = a.view(1, 512, 1024)

print(torch.allclose(a, b))
print(torch.allclose(b, c))
$ python3 temp.py
True
False
$
```

In summary, `permute` is very different from `view` and `reshape` in that it actually changes the ordering of the elements, not just the shape (e.g., consider which elements you visit as you increment the last index by 1).
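A small sketch of why this happens: `permute` only rearranges the tensor’s strides (no data is copied), which leaves the memory layout out of step with the new shape, so `view` refuses to merge the swapped dimensions and `reshape` has to copy. The shapes below mirror the ones in this thread:

```python
import torch

a = torch.randn(1, 8, 64, 1024)
print(a.stride())           # contiguous layout, e.g. (524288, 65536, 1024, 1)

p = a.permute(0, 2, 1, 3)   # swap dims 1 and 2; no data is moved, only strides
print(p.stride())           # same numbers, reordered: (524288, 1024, 65536, 1)
print(p.is_contiguous())    # False: memory order no longer matches the shape

try:
    p.view(1, 512, 1024)    # view cannot merge dims whose strides don't line up
except RuntimeError:
    print("view failed on the permuted tensor")

# reshape copies when needed; contiguous() makes that copy explicit
q = p.contiguous().view(1, 512, 1024)
print(torch.equal(q, p.reshape(1, 512, 1024)))  # True
```

This is why `view` is cheap but only valid when the merge follows the existing memory order, while `permute(...).reshape(...)` silently produces a differently ordered (and freshly copied) tensor.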

The post *For beginners: Do not use view() or reshape() to swap dimensions of tensors!* on the PyTorch Forums is a great intro to the pitfalls of using `view` or `reshape` when the intent is to change the ordering of elements.
