input = torch.Tensor(2, 4, 3) # input: 2 x 4 x 3
print(input.unsqueeze(0).size()) # prints - torch.Size([1, 2, 4, 3])

Use of view():

input = torch.Tensor(2, 4, 3) # input: 2 x 4 x 3
print(input.view(1, -1, -1, -1).size()) # prints - torch.Size([1, 2, 4, 3])

According to the documentation, unsqueeze() inserts a singleton dimension at the position given as a parameter, while view() creates a view of the same storage with different dimensions.

What view() does is clear to me, but I am unable to distinguish it from unsqueeze(). Moreover, I don't understand when to use view() and when to use unsqueeze().

Any help with a good explanation would be appreciated!

You cannot use the view function as you have written it - only one of the missing dimensions can be inferred, not more than one as you have written. This means that when adding a new axis to a tensor using view you have to specify all the other dimensions manually (except at most one). The squeeze and unsqueeze pair of functions are utilities that make this very convenient: we just specify where we want to add or remove an axis.

Also, in the latest versions of PyTorch you can add a new axis by indexing with None (e.g. input[None] adds a leading axis of size 1).
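For example (a minimal sketch, reusing the tensor from the question):

```python
import torch

input = torch.Tensor(2, 4, 3)  # input: 2 x 4 x 3

# Indexing with None inserts a new axis of size 1 at that position,
# exactly like unsqueeze at the same dim.
print(input[None].size())     # torch.Size([1, 2, 4, 3])
print(input[:, None].size())  # torch.Size([2, 1, 4, 3])
```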

What about using resize_ or unsqueeze?
Is there a difference between those two?

If I have a tensor s of shape (300,) and I want it to be (1, 1, 300),
shall I use s.resize_(1, 1, 300) instead of using unsqueeze multiple times?
Does this give the same result?
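A quick comparison sketch (assuming the shapes from the question): resize_ happens to produce the same shape here, but it mutates the tensor in place and would silently truncate or grow the storage if the element counts did not match, so view/unsqueeze are the safer choice for a pure reshape.

```python
import torch

s = torch.arange(300, dtype=torch.float32)  # shape: (300,)

a = s.view(1, 1, 300)            # reshape in one call
b = s.unsqueeze(0).unsqueeze(0)  # add two leading axes one at a time
print(a.size(), b.size())        # both torch.Size([1, 1, 300])

# resize_ gives the same shape here, but it is in-place and does not
# check that the new shape has the same number of elements.
c = s.clone().resize_(1, 1, 300)
print(c.size())                  # torch.Size([1, 1, 300])
```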

In PyTorch 0.4, I get the same result for the input

input = torch.Tensor(2, 4, 3)

with
print(input.view(1, 2, 4, 3).size())

and

print(input.unsqueeze(0).size())

There is no difference between unsqueeze() and view(), if both are used correctly.
They do not change the data storage of the tensor (.storage());
the storage has the same id as well (print(id(input.unsqueeze(0).storage()))).

It seems to me that the two (unsqueeze, view) change only the representation of a tensor.
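One way to check this claim (a sketch; comparing data_ptr() is a more reliable test of shared storage than comparing id() of temporary wrapper objects):

```python
import torch

input = torch.Tensor(2, 4, 3)

a = input.unsqueeze(0)
b = input.view(1, 2, 4, 3)

# Both results are views over the original storage; only the
# size/stride metadata differs, not the underlying data.
print(a.data_ptr() == input.data_ptr())  # True
print(b.data_ptr() == input.data_ptr())  # True
print(a.size() == b.size())              # True
```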

In [1]: import torch
In [2]: im = torch.Tensor(40,40)
In [3]: im.size()
Out[3]: torch.Size([40, 40])
In [4]: im.view(1,1,-1).size()
Out[4]: torch.Size([1, 1, 1600])

im.view(1,1,-1,-1) throws an error. The best I came up with is im.view((1,1) + im.size()).size(), but that just looks ugly.
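A few tidier alternatives for the same transformation (sketch):

```python
import torch

im = torch.Tensor(40, 40)

# Chained unsqueeze, None-indexing, and a starred view all add the two
# leading axes without spelling out the remaining dimensions:
print(im.unsqueeze(0).unsqueeze(0).size())  # torch.Size([1, 1, 40, 40])
print(im[None, None].size())                # torch.Size([1, 1, 40, 40])
print(im.view(1, 1, *im.size()).size())     # torch.Size([1, 1, 40, 40])
```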

view() takes a tensor and reshapes it. A requirement is that the product of the lengths of the dimensions in the new shape equals that of the original. Hence a tensor with shape (4,3) can be reshaped with view to one of shape:

(1,12), (2,6), (3,4), (6,2), (12,1)

but also, any number of superficial dimensions of length 1 can be removed (e.g. view(12)), or added (e.g. (2,6,1), (3,1,1,4), (1,4,1,3,1) etc).

squeeze and unsqueeze are convenient shorthands for these latter two special cases, where adding or removing length-1 dimensions is the only change in shape.
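To illustrate with the (4,3) example above (sketch):

```python
import torch

t = torch.arange(12).view(4, 3)   # 12 elements, shape (4, 3)

# Any shape whose dimension lengths multiply to 12 is valid:
print(t.view(2, 6).size())        # torch.Size([2, 6])
print(t.view(12).size())          # torch.Size([12])
print(t.view(3, 1, 1, 4).size())  # torch.Size([3, 1, 1, 4])

# A shape with the wrong element count (e.g. t.view(5, 3))
# raises a RuntimeError.
```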

Use view (or reshape) when you want to generically reshape a tensor.

If you want to specifically add a superficial dimension (e.g. to treat a single element like a batch, or to concatenate with another tensor), unsqueeze is more convenient (and explicit), but the underlying operation is the same.
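For instance, treating a single image as a batch of one (a sketch; the 3 x 32 x 32 shape is just illustrative):

```python
import torch

image = torch.Tensor(3, 32, 32)      # a single C x H x W image

batch = image.unsqueeze(0)           # explicit: add a batch axis at dim 0
same = image.view(1, *image.size())  # equivalent, but more verbose
print(batch.size(), same.size())     # both torch.Size([1, 3, 32, 32])
```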

Note: view(1, -1, -1, -1) will not work (-1 can only be used once, to infer a single dimension's size). What you want to do can be achieved with view(1, *input.shape).
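Concretely (sketch):

```python
import torch

input = torch.Tensor(2, 4, 3)

# Unpacking the existing shape avoids spelling out each dimension:
print(input.view(1, *input.shape).size())  # torch.Size([1, 2, 4, 3])

# Multiple -1s are rejected, since only one dimension can be inferred:
try:
    input.view(1, -1, -1, -1)
except RuntimeError as e:
    print("view failed:", e)
```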