Why is inplace set to False by default for nn.LeakyReLU?

By default, the inplace parameter of nn.LeakyReLU is set to False. I was wondering what the repercussions of changing that are, and why False is the default behavior?

I believe that if we do not explicitly set it to True, a bit more memory is used, since a copy of that layer's output needs to be stored. LeakyReLU should not have any parameters (right?), so ideally IMHO inplace=True should be the default behavior. Why would anyone want it to be False?
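As a rough illustration of the memory point (just a sketch I put together, not anything from the docs), the in-place version writes the result into the input's storage instead of allocating a new tensor:

```python
import torch
import torch.nn as nn

x = torch.randn(1024)  # plain tensor, no autograd involved

out_default = nn.LeakyReLU()(x)              # inplace=False: allocates a new output tensor
out_inplace = nn.LeakyReLU(inplace=True)(x)  # inplace=True: writes the result into x itself

print(out_default.data_ptr() == x.data_ptr())  # False -> a separate copy was created
print(out_inplace.data_ptr() == x.data_ptr())  # True  -> no extra allocation
```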

In-place operations can overwrite values that are required to compute gradients and might thus raise an error such as:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1024]], which is output 0 of LeakyReluBackward1, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
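As a minimal sketch of the kind of situation that triggers this (not your exact setup, just an assumed toy example): the in-place activation's output is saved for the backward pass, and a second in-place op on it bumps the tensor's version counter, so the backward call fails:

```python
import torch
import torch.nn as nn

act = nn.LeakyReLU(inplace=True)

x = torch.randn(1024, requires_grad=True)
h = x * 2           # intermediate tensor (an in-place op directly on a leaf that requires grad would fail immediately)
y = act(h)          # overwrites h; autograd saves this result for the backward pass
y.add_(1.0)         # a second in-place op bumps the tensor's version counter
y.sum().backward()  # RuntimeError: ... output 0 of LeakyReluBackward..., is at version 2; expected version 1
```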

This limitation is also described here, so you should be careful with its usage.
The default of inplace=False will work without raising any errors, and you are of course free to check whether inplace=True works for your model. If PyTorch doesn't raise any errors, the gradient calculation will be correct.
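For example, one way to run such a check (a sketch with a toy model, not anything specific to your setup) is a single forward and backward pass with anomaly detection enabled:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 1024),
    nn.LeakyReLU(inplace=True),  # candidate in-place activation
    nn.Linear(1024, 1),
)

x = torch.randn(8, 16)
with torch.autograd.detect_anomaly():
    out = model(x)
    out.mean().backward()  # if this runs without errors, the in-place activation is fine here

print("backward pass succeeded")
```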
