In PyTorch 0.4, is it recommended to use `reshape` rather than `view` when possible?

In PyTorch 0.4, is it generally recommended to use Tensor.reshape() rather than Tensor.view() whenever possible?

And, for consistency, the same question for Tensor.shape vs. Tensor.size().



Tensor.size() is the original function. Tensor.shape was added to be nicer to numpy users. Both are exactly the same.
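The equivalence is easy to check: both return a torch.Size, and the values the names below are illustrative) compare equal:

```python
import torch

t = torch.zeros(2, 3)

# Both accessors return the same torch.Size object value
print(t.size())  # torch.Size([2, 3])
print(t.shape)   # torch.Size([2, 3])
print(t.shape == t.size())  # True
```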

Tensor.reshape() and Tensor.view() though are not the same.

  • Tensor.view() works only on contiguous tensors and never copies memory. It raises an error on a non-contiguous tensor.
  • Tensor.reshape() works on any tensor and will make a copy if needed.
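The difference above is easy to demonstrate with a transposed tensor, which is a common way to get non-contiguous memory (a minimal sketch; the variable names are just for illustration):

```python
import torch

a = torch.arange(6).view(2, 3)
b = a.t()  # transpose produces a non-contiguous tensor

print(b.is_contiguous())  # False

flat = b.reshape(6)  # works: reshape silently makes a copy if needed

try:
    b.view(6)  # fails: view requires contiguous memory
except RuntimeError as e:
    print("view raised:", e)
```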

They are similar, but not interchangeable. For example, if you want to modify the result in place and expect the change to be reflected in the original Tensor, then you have to use view(). On the other hand, if you don’t need this behaviour, you can use reshape() so that you don’t have to worry about the input being contiguous.
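A short sketch of the aliasing guarantee (the variable names are just for illustration): a view always shares storage with its source, so an in-place write through the view is visible in the original, whereas reshape() may hand back a copy when the input is non-contiguous.

```python
import torch

x = torch.zeros(4)
v = x.view(2, 2)   # v shares memory with x
v[0, 0] = 1.0      # in-place change through the view
print(x[0])        # tensor(1.) — visible in the original

# reshape() on a non-contiguous tensor returns a copy, so
# writes to the result would NOT propagate back to the source
c = torch.arange(6).view(2, 3).t().reshape(6)
```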


Thanks a lot for the explanation!