Can the new viewed tensor have a different size than the original tensor?

I read this in the tensor.view() function documentation.

Could you take a look at this example?

I tried it, but I got an error:

z
Out[12]:
tensor([[ 0.9739,  0.6249],
        [ 1.6599, -1.1855],
        [ 1.4894, -1.7739],
        [-0.8980,  1.5969],
        [-0.4555,  0.7884],
        [-0.3798, -0.3718]])
z = x.view(1, 2)
Traceback (most recent call last):
  File "D:\temp\Python35\lib\site-packages\IPython\core\interactiveshell.py", line 2961, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 1, in <module>
    z = x.view(1, 2)
RuntimeError: invalid argument 2: size '[1 x 2]' is invalid for input with 12 elements at …\aten\src\TH\THStorage.cpp:84
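
Shapes whose element count matches do work for me, so my question is only about the "different size" wording. Here is what I tried (assuming x, like z above, holds 12 elements):

import torch

x = torch.randn(6, 2)   # 12 elements, like z above

z = x.view(2, 6)        # works: 2 * 6 == 12
z = x.view(3, 4)        # works: 3 * 4 == 12
z = x.view(12)          # works: 12 == 12
# z = x.view(1, 2)      # fails: 1 * 2 != 12, as in the traceback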

Hi,

What is x in that example?

Anyway, view is like reshape in NumPy (with additional considerations I'm not familiar with), but if you call:

x.view(shape)

then there should be as many elements in x as in a tensor with size() == shape. (To go from a 3D tensor of size (C, H, W) to a 2D tensor with size() == shape, you need shape[0] * shape[1] == C * H * W.)
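
For example (the sizes here are just illustrative):

import torch

C, H, W = 3, 4, 5          # illustrative sizes
x = torch.randn(C, H, W)   # 3D tensor with C * H * W = 60 elements

y = x.view(C, H * W)       # 2D: shape[0] * shape[1] == 3 * 20 == 60
print(y.size())            # torch.Size([3, 20])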

x is just a tensor. I understood torch.Tensor.view() the same way you described.

My question comes from here:

“The returned tensor shares the same data and must have the same number of elements, but may have a different size”

As @el_samou_samou explained, the number of elements stays the same, while the size may differ.
Here is a small example:

import torch

x = torch.randn(2, 2, 2, 2)  # 16 elements
print(x.size())              # torch.Size([2, 2, 2, 2])
x = x.view(-1, 2, 1)         # still 16 elements
print(x.size())              # torch.Size([8, 2, 1])
x = x.view(-1)               # still 16 elements
print(x.size())              # torch.Size([16])

While the underlying data of x stays the same, its size changes after each view call.
Nonetheless, the number of elements is the same.
A call like x.view(2) won’t work, as we have 16 elements.
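
You can also check the data sharing directly: writing through the view is visible in the original tensor (the names here are just for illustration):

import torch

x = torch.randn(2, 2, 2, 2)
y = x.view(-1)         # same underlying data, size torch.Size([16])

y[0] = 42.0            # write through the view...
print(x[0, 0, 0, 0])   # ...and the original sees it: tensor(42.)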


Now I understand! I had read it as meaning the total size could change. Thanks for the example!