I am trying to load a pretrained model and fine-tune it. The original ReLU layers have the inplace=True option set. Since this in-place option was causing problems in autograd, I changed it to inplace=False. What confuses me is that the output of the network is completely different between the two inplace settings. Since ReLU has no trainable parameters, why would this happen?
Below is the module where the problem happened.
import torch.nn as nn

class ResidualConvUnit(nn.Module):
    def __init__(self, features):
        super().__init__()
        self.conv1 = nn.Conv2d(
            features, features, kernel_size=3, stride=1, padding=1, bias=True
        )
        self.conv2 = nn.Conv2d(
            features, features, kernel_size=3, stride=1, padding=1, bias=True
        )
        self.relu = nn.ReLU(inplace=False)

    def forward(self, x):
        out = self.relu(x)    # with inplace=True, this would overwrite x itself
        out = self.conv1(out)
        out = self.relu(out)
        out = self.conv2(out)
        return out + x        # skip connection: adds the input back in
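For reference, here is a minimal standalone sketch (using only standard torch calls) of how the two settings behave differently: with inplace=True, ReLU overwrites its input tensor in place instead of allocating a new output tensor.

import torch
import torch.nn as nn

x = torch.randn(4)
x_orig = x.clone()

# inplace=False: the input is left untouched, a new tensor is returned
out = nn.ReLU(inplace=False)(x)
print(torch.equal(x, x_orig))  # True: x is unchanged

# inplace=True: the input itself is overwritten with relu(x)
out = nn.ReLU(inplace=True)(x)
print(torch.equal(out, x))     # True: out and x now hold the same values
print(torch.equal(x, x_orig))  # False: x has been modified in place

In the forward pass above, that would mean the first self.relu(x) with inplace=True modifies x before the skip connection out + x reads it, so the residual adds the rectified input rather than the original one.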