Commented-out modules in a model change the output. Why?

During some tests I had a model written this way:

import torch
import torch.nn as nn

class Downsampler(nn.Module):
    def __init__(self, nFeat, nBlock):
        super().__init__()
        self.nBlock = nBlock

        self.RDBS = nn.ModuleList()
        for i in range(nBlock):
            self.RDBS.append(RDB(nFeat, nBlock, 16))

        # global feature fusion (GFF)
        self.GFF_1x1 = nn.Conv2d(nFeat * nBlock, nFeat, kernel_size=1, padding=0, bias=False)
        self.GFF_3x3 = nn.Conv2d(nFeat, nFeat, kernel_size=3, padding=1, bias=False)

        self.filter = nn.Sequential(
            nn.Conv2d(nFeat, nFeat, kernel_size=7, padding=3, bias=False, groups=nFeat)
        )

        self.Gauss = GaussianLayer(layers=3, k=21, sigma=3)

    def forward(self, x):

        # RDB_outs = []
        # for i in range(self.nBlock):
        #     out = self.RDBS[i](x)
        #     RDB_outs.append(out)
        # FF = torch.cat(RDB_outs, 1)
        # FdLF = self.GFF_1x1(FF)

        filtered = self.Gauss(x)

        DFdLF = nn.functional.interpolate(filtered, scale_factor=0.5, mode='bicubic', align_corners=False)

        # FGF = self.GFF_3x3(DFdLF)
        # FDF = FGF + x

        return DFdLF

As you can see, the modules are still instantiated in __init__, but their use is commented out in forward. After I found that the model works better with this implementation, I deleted the unused modules from __init__ along with the commented lines in forward, ending up with this:

import torch
import torch.nn as nn

class Downsampler(nn.Module):
    def __init__(self, nFeat, nBlock):
        super().__init__()
        self.nBlock = nBlock

        self.Gauss = GaussianLayer(layers=3, k=21, sigma=3)

    def forward(self, x):

        filtered = self.Gauss(x)

        DFdLF = nn.functional.interpolate(filtered, scale_factor=0.5, mode='bicubic', align_corners=False)

        return DFdLF
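As a side note, the interpolate call with scale_factor=0.5 halves the spatial resolution. A quick shape check with a stand-in input (GaussianLayer is defined elsewhere in the author's code, so it is skipped here):

```python
import torch
import torch.nn as nn

# Stand-in feature map: (batch, channels, H, W)
x = torch.randn(1, 16, 64, 64)

# Bicubic downsampling by a factor of 2 in each spatial dimension
y = nn.functional.interpolate(x, scale_factor=0.5, mode='bicubic', align_corners=False)

print(y.shape)  # torch.Size([1, 16, 32, 32])
```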

I found that the results are different after deleting these lines, even though those modules were only constructed earlier and never used.

Most likely you are just seeing the effect of different parameter initializations.
Since those modules were initialized in __init__, the random number generator was called and therefore sampled different values for all following modules.
You'll see this effect even if you set a seed.
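The effect is easy to reproduce in isolation. A minimal sketch (layer sizes are arbitrary, chosen just for illustration): constructing an extra module consumes draws from the global RNG, so a module created afterwards gets different weights than it would under the same seed without the extra construction.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
unused = nn.Conv2d(8, 8, kernel_size=1)   # never used, but its init consumes RNG draws
conv_a = nn.Conv2d(8, 8, kernel_size=3)

torch.manual_seed(0)
conv_b = nn.Conv2d(8, 8, kernel_size=3)   # same seed, but no unused module created first

# Despite the identical seed, the weights differ because the RNG state diverged
print(torch.equal(conv_a.weight, conv_b.weight))  # False
```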

Could you remove the unused modules, run the experiment for a few different seeds, and compare the results?
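Such a seed sweep could be sketched like this; run_experiment is a hypothetical stand-in for the actual training-and-evaluation loop (the placeholder metric is just a random number so the snippet runs on its own):

```python
import torch

def run_experiment(seed):
    """Hypothetical stand-in: seed, build the model, train, evaluate, return a metric."""
    torch.manual_seed(seed)
    # ... build Downsampler, train, evaluate ...
    return float(torch.randn(1))  # placeholder metric

# Compare results across several seeds to separate initialization noise
# from a genuine difference between the two model variants.
results = {seed: run_experiment(seed) for seed in (0, 1, 2, 42)}
for seed, metric in results.items():
    print(f"seed={seed}: metric={metric:.4f}")
```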

That's possible, but I have to mention that the outputs are really different. I'll check with different random seeds and update the post.