Guidelines for when and why one should set inplace = True?

I know what inplace does, but can you please explain why I should opt to use it or not? Are there any caveats, and what are the scenarios in which this option is important, specifically when building a NN? Take the ReLU layer as an example, in which this option is available.


Hello,

First, there is an important thing to consider: you can only use inplace=True when you are sure your model won't raise an error because of it. For example, when training a CNN, autograd needs certain intermediate values at backpropagation time; an inplace=True operation can overwrite one of those saved values, so your backward pass is no longer valid. PyTorch detects this kind of error, so you'll be notified about it with a RuntimeError.
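As a minimal sketch of that failure mode (the tensor names here are just for illustration): sigmoid saves its output for the backward pass, so modifying that output in place trips autograd's version check.

```python
import torch

x = torch.randn(4, requires_grad=True)
y = torch.sigmoid(x)  # sigmoid saves its output for the backward pass
y.relu_()             # inplace ReLU overwrites that saved output
y.sum().backward()    # RuntimeError: one of the variables needed for
                      # gradient computation has been modified by an
                      # inplace operation
```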

Second, if you do not get any error, it is better to use an inplace=True operation because it won't allocate new memory for the output of your layer, which can help prevent out-of-memory errors.
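A common safe case is ReLU right after a convolution, since the conv's backward needs its input and weights rather than its output; a sketch (the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

# The conv's backward uses its *input*, not its output, so overwriting
# the output in place with ReLU is safe and saves one activation buffer.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)

out = model(torch.randn(1, 3, 224, 224))
out.sum().backward()  # backprops without complaint
```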

Finally, as far as I know, developers usually use inplace=True as long as it doesn't raise an error.

Best


Indeed, @Nikronic nails it with the rule of thumb: you can use inplace for memory efficiency unless it breaks.
You might also be less eager to use inplace when planning to use the JIT, as it will fuse pointwise non-inplace operations like ReLU if there are several in a row.

The two things to avoid are:

  • you move a leaf tensor into the graph (using inplace on something you just defined with requires_grad=True),
  • the operation before the inplace one wants its result to compute the backward. Whether this is the case is, unfortunately, not easy to tell from the outside.

As a corollary, you should avoid using inplace on the inputs of your re-usable module, lest a future use fall into one of the two situations.
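The first situation is easy to reproduce in isolation; a minimal sketch (the tensor name is arbitrary):

```python
import torch

w = torch.randn(3, requires_grad=True)  # a leaf tensor
w.relu_()  # RuntimeError: a leaf Variable that requires grad
           # is being used in an in-place operation.
```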

Best regards

Thomas
