Different output after changing inplace to False in a pretrained model's ReLU

I am trying to load a pretrained model and fine-tune it. The original ReLU layers use the "inplace=True" option. Since this inplace option was causing problems with autograd, I changed it to "inplace=False". What confuses me is that the output of the network is completely different with these two inplace options. Since ReLU has no trainable parameters, why would this happen?

Below is the module where the problem happened.

class ResidualConvUnit(nn.Module):

    def __init__(self, features):
        super().__init__()
        self.conv1 = nn.Conv2d(
            features, features, kernel_size=3, stride=1, padding=1, bias=True
        )
        self.conv2 = nn.Conv2d(
            features, features, kernel_size=3, stride=1, padding=1, bias=True
        )
        self.relu = nn.ReLU(inplace=False)

    def forward(self, x):
        out = self.relu(x)
        out = self.conv1(out)
        out = self.relu(out)
        out = self.conv2(out)
        return out + x

What kind of data is the network working with here?
From my experience, setting the ReLU to inplace=True modifies the input features in place rather than allocating new memory for a copy of the input. Autograd also doesn't always play well with inplace=True, so I don't see any issue with switching it off. Can you please upload your training process too?
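A minimal standalone sketch (independent of the model above) of what "modifies the input in place" means: with inplace=True, the ReLU returns the very same tensor object it was given, with the negative entries overwritten.

```python
import torch
import torch.nn as nn

x = torch.tensor([-1.0, 2.0, -3.0])

out = nn.ReLU(inplace=True)(x)

# The output is the very same tensor object: no new memory was allocated,
# and the negative entries of x have been overwritten with zeros.
print(out is x)  # True
print(x)         # tensor([0., 2., 0.])
```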

As @Henry_Chibueze pointed out, the input x would be modified in place, which would thus yield different results in return out + x.
You could either clone() x before passing it to the first relu:

out = self.relu(x.clone())

or just don’t use the inplace version.
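To see why the residual connection is what breaks, here is a toy sketch (plain tensors, not the ResidualConvUnit itself): with inplace=True the x that survives to out + x is already relu(x), so the skip connection adds the wrong values wherever the input was negative.

```python
import torch
import torch.nn as nn

x = torch.tensor([-1.0, 2.0, -3.0, 4.0])

# Non-inplace: x is untouched, so out + x uses the original input.
out = nn.ReLU(inplace=False)(x)
res_noninplace = out + x  # relu(x) + x

# Inplace: relu overwrites x's negative entries with zeros, so the
# "x" in out + x is actually relu(x), giving 2 * relu(x) instead.
x2 = x.clone()
out_ip = nn.ReLU(inplace=True)(x2)
res_inplace = out_ip + x2

print(res_noninplace)  # tensor([-1., 4., -3., 8.])
print(res_inplace)     # tensor([0., 4., 0., 8.])
```

The two results agree only where the input is non-negative, which is why the two model variants diverge.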