In PyTorch, do functions like view(), permute(), and contiguous() operate in-place or allocate a new memory block?

I would like to know whether, in PyTorch, functions such as view(), permute(), and contiguous() operate on the Tensor in-place or allocate a new memory block to store the result.
Recently, I found that my data requires a lot of memory, so I need to reduce memory usage in the forward pass, and knowing the answer to the question above would be of great help.
Thanks a lot


Hi,

It should be mentioned in the doc.

  • view never copies the underlying data; it returns a new Tensor that shares the input's storage (but it only works on a contiguous Tensor)
  • permute never copies the underlying data either (but it returns a non-contiguous Tensor)
  • contiguous allocates new memory only if the input is non-contiguous. Otherwise it is a no-op and returns the input unchanged. You can verify each case with the sketch below.
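A quick way to check this yourself is to compare data_ptr() (the address of the first element of the underlying storage) before and after each call. This is a minimal sketch; the tensor x and its shapes are just illustrative:

```python
import torch

x = torch.randn(2, 3)

# view returns a new Tensor object that shares x's storage: no data copy
v = x.view(3, 2)
print(v.data_ptr() == x.data_ptr())   # True

# permute also shares storage, but the result is non-contiguous
p = x.permute(1, 0)
print(p.data_ptr() == x.data_ptr())   # True
print(p.is_contiguous())              # False

# contiguous is a no-op on an already-contiguous Tensor...
c1 = x.contiguous()
print(c1.data_ptr() == x.data_ptr())  # True: same storage, no copy

# ...but copies the data into fresh memory when the input is non-contiguous
c2 = p.contiguous()
print(c2.data_ptr() == p.data_ptr())  # False: new memory allocated
```

Note that "no copy" here refers to the tensor's data; each call still creates a new lightweight Tensor object with its own shape and stride metadata.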

Thanks, that helps a lot.