# What exactly does "inplace" do when set to True/False?

As the title says, what’s the difference when setting `inplace=True` in `nn.ReLU` and `nn.Dropout`? Would it affect training in some way?

Edit: If I were to use `nn.Conv2d` or `nn.Dropout` in `nn.Sequential`, it would be better to use `inplace=True`, correct?

Edit 2: I meant `nn.ReLU`, not `nn.Conv2d`.

`nn.Conv2d` wouldn’t have the `inplace` argument (at least not in the `torch.nn.Conv2d` definition).
The `inplace` argument in e.g. `nn.Dropout` layers (or other functions) applies the operation “inplace”, i.e. directly on the values in the same memory locations, without creating a new output tensor.
This can save some memory, but it is disallowed whenever the unmodified inputs are needed for the gradient calculation (inplace operations would also prevent the JIT from fusing operations, if I’m not mistaken).
A small example is given here:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5, inplace=False)

x = torch.randn(1, 5)
print(x)
> tensor([[ 0.4276,  0.5935, -0.0205,  0.2411, -1.3081]])

out = drop(x)
print(out)
> tensor([[0.8551, 1.1870, -0.0000, 0.0000, -0.0000]])
print(x)  # unchanged
> tensor([[ 0.4276,  0.5935, -0.0205,  0.2411, -1.3081]])

drop = nn.Dropout(p=0.5, inplace=True)
out = drop(x)
print(out)
> tensor([[ 0.8551,  0.0000, -0.0410,  0.0000, -0.0000]])
print(x)  # also changed
> tensor([[ 0.8551,  0.0000, -0.0410,  0.0000, -0.0000]])
```
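
The same applies to `nn.ReLU(inplace=True)`, which the question originally asked about. A minimal sketch (the tensor values are random, so your output will differ):

```python
import torch
import torch.nn as nn

relu = nn.ReLU(inplace=True)

x = torch.randn(1, 5)
out = relu(x)
# The negative entries were zeroed directly in x's memory,
# so out and x share the same storage.
print(out.data_ptr() == x.data_ptr())  # True
print(x)  # negative values are now 0
```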

My apologies, I meant `nn.ReLU`. I tried using `nn.Dropout` with `inplace=True` in `nn.Sequential` and got an error saying:

```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
```

Maybe this is caused by setting `inplace=True`?

Yes, this is most likely caused by the usage of `inplace=True`: as previously mentioned, the error is raised if the unmodified inputs are needed to calculate the gradients.
This post gives a small example of why inplace ops are disallowed for specific (chains of) operations.
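
For reference, here is a minimal sketch reproducing that error. It assumes a preceding op whose backward pass needs its own output, e.g. `torch.sigmoid`, whose gradient is computed from the output as `y * (1 - y)`; applying an inplace dropout to that output then breaks the backward pass:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 5, requires_grad=True)
y = torch.sigmoid(x)  # backward needs y itself: d(sigmoid)/dx = y * (1 - y)

drop = nn.Dropout(p=0.5, inplace=True)
out = drop(y)  # overwrites y in place, bumping its version counter

# Autograd detects that y was modified after being saved for backward:
# RuntimeError: one of the variables needed for gradient computation
# has been modified by an inplace operation
out.sum().backward()
```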