What's the difference between torch.reshape vs. torch.view

It seems like torch.reshape and torch.view do pretty much the same thing, am I missing something? Is there a situation where you would use one and not the other?

As explained in the docs, reshape will return a view if possible and will trigger a copy otherwise. If in doubt, use reshape unless you explicitly need a view of the tensor (i.e., shared memory with the original).
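To make "view if possible, copy otherwise" concrete, here is a small sketch (not from the thread) that compares `data_ptr()` to see whether storage was shared or newly allocated:

```python
import torch

# Contiguous tensor: reshape can return a view (no copy).
x = torch.arange(8)
y = x.reshape(2, 4)
print(y.data_ptr() == x.data_ptr())  # True -- same underlying storage

# Non-contiguous tensor: reshape must allocate and copy.
t = torch.arange(6).reshape(2, 3).t()   # transpose makes it non-contiguous
flat = t.reshape(-1)
print(flat.data_ptr() == t.data_ptr())  # False -- a new tensor was allocated
```

Calling `t.view(-1)` in the second case would raise a RuntimeError instead, since no single stride can describe the transposed element order.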

I’m having a hard time understanding what a view is… it’s like a reshaped vector with constraints?

A view points to the same data stored in memory, but with different metadata, such as its shape and stride.
Here is a small example:

x = torch.randn(2, 4)
print(x.size(), x.stride())
# torch.Size([2, 4]) (4, 1)
print(x.is_contiguous())
# True

y = x.view(-1)
print(y.size(), y.stride())
# torch.Size([8]) (1,)
print(y.is_contiguous())
# True
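Because `y` above is a view, it shares storage with `x`: the two tensors report the same data pointer, and writing through one is visible through the other. A quick check (reusing the same shapes as above):

```python
import torch

x = torch.randn(2, 4)
y = x.view(-1)

# Same storage: the view has the same data pointer as the original.
print(y.data_ptr() == x.data_ptr())  # True

# Writing through the view is visible in the original tensor.
y[0] = 42.0
print(x[0, 0].item())  # 42.0
```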

x = torch.randn(2, 4, 8)
z = x[:, ::2]
print(z.size(), z.stride())
# torch.Size([2, 2, 8]) (32, 16, 1)
print(z.is_contiguous())
# False

z1 = z.view(-1)
# RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.

z2 = z.reshape(-1)
print(z2.is_contiguous())
# True
print(z2.size(), z2.stride())
# torch.Size([32]) (1,)
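Since `z` was non-contiguous, `z2 = z.reshape(-1)` had to copy. That means `z2` lives in new memory and is no longer connected to `z` — a small sketch of the consequence:

```python
import torch

x = torch.randn(2, 4, 8)
z = x[:, ::2]          # non-contiguous slice
z2 = z.reshape(-1)     # reshape had to copy here

# The copy lives in new memory...
print(z2.data_ptr() == x.data_ptr())  # False

# ...so writes to it do not propagate back to z.
z2[0] = 123.0
print(z[0, 0, 0].item() == 123.0)     # False -- z is unchanged
```

This is the one behavioral difference to keep in mind: with `view` you are guaranteed shared memory, while `reshape` may silently break that link.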