nn.Conv2d doesn’t have an inplace argument (at least not in the torch.nn.Conv2d definition).
The inplace argument in e.g. nn.Dropout layers (or other functions) applies the operation “inplace”, i.e. directly on the values in the same memory locations without creating a new output tensor.
This can save some memory, but might also be disallowed if the inputs need to stay unmodified for the gradient calculation (inplace operations would also prevent the JIT from fusing operations, if I’m not mistaken).
A small example is given here:
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5, inplace=False)
x = torch.randn(1, 5)
print(x)
> tensor([[ 0.4276, 0.5935, -0.0205, 0.2411, -1.3081]])
out = drop(x)
print(out)
> tensor([[0.8551, 1.1870, -0.0000, 0.0000, -0.0000]])
print(x)
> tensor([[ 0.4276, 0.5935, -0.0205, 0.2411, -1.3081]])
drop = nn.Dropout(p=0.5, inplace=True)
out = drop(x)
print(out)
> tensor([[ 0.8551, 0.0000, -0.0410, 0.0000, -0.0000]])
print(x) # also changed
> tensor([[ 0.8551, 0.0000, -0.0410, 0.0000, -0.0000]])
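If you want to double check that no new tensor was created in the inplace case, you could continue the snippet above and compare the data pointers of out and x:
print(out.data_ptr() == x.data_ptr()) # both variables point to the same memory
> True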
Yes, this is most likely caused by using inplace=True, if the inputs are needed in an unmodified state to calculate the gradients, as previously mentioned. This post gives a small example of why inplace ops are disallowed for specific (chains of) operations.
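As a rough sketch (not your exact model), the error would be raised e.g. if the dropout is applied inplace on the output of an activation such as sigmoid, since sigmoid saves its output for the backward pass:
import torch
import torch.nn as nn

x = torch.randn(1, 5, requires_grad=True)
y = torch.sigmoid(x)  # sigmoid saves its output for the gradient computation
drop = nn.Dropout(p=0.5, inplace=True)
out = drop(y)         # modifies y inplace
out.mean().backward() # RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
Removing inplace=True (or cloning the activation output before applying the dropout) would avoid this error.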