Do dummy layers impact the result?

Hello,
I am new to PyTorch. I have a network that takes two images as inputs (an RGB version and a DSM version). When I build the network with layers that only deal with the RGB image, it works fine, but when I add layers that deal with the DSM data and link them to the RGB layers, the accuracy drops dramatically.

So I tried keeping the DSM layers without linking them to the main branch. They are not supposed to impact the final result; they are dummy layers. However, the drop in accuracy is still there. Is that normal? Why does it happen? Thank you
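
To illustrate what I expect, here is a minimal toy sketch (placeholder layers, not my real network): a branch whose result is thrown away should leave the main output unchanged within a single forward pass.

import torch
import torch.nn as nn

rgb_layer = nn.Conv2d(3, 8, 3, padding=1)   # stands in for the RGB branch
dsm_layer = nn.Conv2d(1, 8, 3, padding=1)   # stands in for the dummy DSM branch

x = torch.randn(4, 3, 16, 16)
z = torch.randn(4, 1, 16, 16)

out_without = rgb_layer(x)  # DSM branch not run at all
_ = dsm_layer(z)            # DSM branch run, result discarded
out_with = rgb_layer(x)

print(torch.equal(out_without, out_with))  # True: the branches share no layers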

A snippet of my code:

def forward(self, x, z):  # x = RGB, z = DSM
    # Initial block
    x = self.conv1(x)
    x = self.bn1(x)
    x = self.relu(x)
    x = self.maxpool(x)

    z = self.conv1_dsm(z)
    z = self.bn1(z)  # reuses the same bn1 as the RGB branch, so in train
                     # mode its running statistics are updated by z as well
    z = self.relu(z)
    z = self.maxpool(z)
    # z is a dummy branch here: its output is never merged into x
    # x = x + z

    # Encoder blocks
    e1_dsm = self.encoder1_dsm(z)
    e1 = self.encoder1(x)
    # e1 = e1 + e1_dsm

Are you returning e1_dsm, or just throwing it away?
If you never add it to some part of the x path, its parameters shouldn’t receive any gradient updates.
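
A quick way to check this, with a hypothetical toy model standing in for yours (the names and shapes below are made up for illustration):

import torch
import torch.nn as nn

class TwoBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)      # main RGB branch
        self.conv1_dsm = nn.Conv2d(1, 8, 3, padding=1)  # dummy DSM branch
        self.head = nn.Linear(8, 2)

    def forward(self, x, z):
        x = self.conv1(x)
        _ = self.conv1_dsm(z)  # computed, but never merged into x
        return self.head(x.mean(dim=(2, 3)))

model = TwoBranch()
out = model(torch.randn(4, 3, 16, 16), torch.randn(4, 1, 16, 16))
out.sum().backward()

for name, p in model.named_parameters():
    print(name, "-> grad is None" if p.grad is None else "-> has grad")
# conv1_dsm.* prints "grad is None": the unused branch receives no gradients

If the DSM parameters in your real network do show gradients after backward(), something is still connecting z to the output.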
