What is an `in-place operation`?

I came across the term "in-place operation" in the documentation (http://pytorch.org/docs/master/notes/autograd.html). What does it mean?


Hi,

An in-place operation is an operation that directly changes the content of a given Tensor without making a copy. In-place operations in PyTorch are always postfixed with a `_`, like `.add_()` or `.scatter_()`. Python operations like `+=` or `*=` applied to tensors are also in-place operations.
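As a quick illustration of the naming convention (a minimal sketch; `mul_` is just one of the many `_`-suffixed methods):

>>> import torch
>>> t = torch.ones(3)
>>> t.mul_(2)       # in-place multiply: t itself is modified
tensor([2., 2., 2.])
>>> t.mul_(2) is t  # in-place methods return the very same tensor object
True
>>> t
tensor([4., 4., 4.])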


I initially found in-place operations in the following PyTorch tutorial:

Adding two tensors

>>> import torch

>>> x = torch.rand(1)
>>> x

 0.2362
[torch.FloatTensor of size 1]


>>> y = torch.rand(1)
>>> y

 0.7030
[torch.FloatTensor of size 1]

Normal addition

# Addition of two tensors creates a new tensor.
>>> x + y

 0.9392
[torch.FloatTensor of size 1]


# The value of x is unchanged.
>>> x

 0.2362
[torch.FloatTensor of size 1]

In-place addition

# An in-place addition modifies one of the tensors itself; here it updates the value of x.
>>> x.add_(y)

 0.9392
[torch.FloatTensor of size 1]


>>> x

 0.9392
[torch.FloatTensor of size 1]
[torch.FloatTensor of size 1]

So in this tutorial, is the way the network is constructed in the forward method an in-place operation?
http://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py

I understand that x.add_(y) is an in-place operation.
Is x = x + y in-place, and will it cause any problems for autograd?

Hi,

No, x = x + y is not in-place. x += y is in-place.
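For intuition on when in-place operations do cause autograd problems, here is a minimal sketch (exp is chosen because its gradient reuses its own output, so autograd saves that output for the backward pass; the exact error text varies by PyTorch version):

>>> import torch
>>> a = torch.ones(3, requires_grad=True)
>>> b = a.exp()        # autograd saves b: the gradient of exp is exp itself
>>> b += 1             # in-place update of a tensor needed for backward
>>> b.sum().backward()
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation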


What is the difference? They both modify x, don't they?

True, they both change x. But x = x + y allocates a new tensor and rebinds the name x to it, while the in-place operation reuses x's existing memory.

E.g., normal operation vs. in-place operation:

>>> x = torch.rand(1)
>>> y = torch.rand(1)
>>> x
tensor([0.2738])
>>> id(x)
140736259305336
>>> x = x + y   # Normal operation
>>> id(x)
140726604827672 # New location
>>> x += y
>>> id(x)
140726604827672 # Existing location used (in-place)
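Note that id() tracks the Python object rather than the tensor's memory; the same check can be made on the underlying storage with Tensor.data_ptr() (a minimal sketch):

>>> x = torch.rand(1)
>>> y = torch.rand(1)
>>> before = x.data_ptr()  # address of x's underlying storage
>>> x = x + y              # normal operation: new tensor, new storage
>>> x.data_ptr() == before
False
>>> before = x.data_ptr()
>>> x += y                 # in-place: the existing storage is reused
>>> x.data_ptr() == before
True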

Thanks, that makes sense.

Thanks, this is a good explanation.