Output values of Module are incorrect if sub-module instance variable is lower-cased

I have the following small RRDB module network:

from typing import Optional

import torch.nn as nn

# `blocks` and `ResidualDenseBlock5C` come from elsewhere in this project.

class RRDB(nn.Module):
    """Residual in Residual Dense Block."""

    def __init__(
        self,
        nc: int,
        kernel_size: int = 3,
        gc: int = 32,
        stride: int = 1,
        bias: bool = True,
        pad_type: Optional[blocks.PAD_TYPES_T] = "zero",
        norm_type: Optional[blocks.NORM_TYPES_T] = None,
        act_type: Optional[blocks.ACT_TYPES_T] = "leakyrelu",
        mode: blocks.CONV_MODE_T = "CNA",
        plus: bool = False
    ):
        super().__init__()
        self.RDB1 = ResidualDenseBlock5C(nc, kernel_size, gc, stride, bias, pad_type, norm_type, act_type, mode, plus)
        self.RDB2 = ResidualDenseBlock5C(nc, kernel_size, gc, stride, bias, pad_type, norm_type, act_type, mode, plus)
        self.RDB3 = ResidualDenseBlock5C(nc, kernel_size, gc, stride, bias, pad_type, norm_type, act_type, mode, plus)

    def forward(self, x):
        out = self.RDB1(x)
        out = self.RDB2(out)
        out = self.RDB3(out)
        # Empirically, we use 0.2 to scale the residual for better performance
        return out.mul(0.2) + x

As you can see, I create three RDB block instances inside it. If I keep them named self.RDB1/self.RDB2/self.RDB3, it works correctly. If I lowercase them to self.rdb1/self.rdb2/self.rdb3, it returns noticeably incorrect output.

I have absolutely no idea what could be causing this. I compared the initialized structure of the instances and it matches either way, but the output of forward does not.
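In case it helps others debug the same symptom, here is a self-contained sketch (using stand-in modules, not my real RRDB, and an in-memory buffer instead of a real .pth path) of comparing a model's state_dict keys against the keys stored in a checkpoint:

```python
import io

import torch
import torch.nn as nn


# Stand-in for the original network, saved with an upper-case attribute name.
class Saved(nn.Module):
    def __init__(self):
        super().__init__()
        self.RDB1 = nn.Linear(4, 4)


# The same network after lower-casing the attribute name.
class Renamed(nn.Module):
    def __init__(self):
        super().__init__()
        self.rdb1 = nn.Linear(4, 4)


# Simulate saving and reloading a checkpoint (a real .pth file works the same).
buf = io.BytesIO()
torch.save(Saved().state_dict(), buf)
buf.seek(0)
checkpoint = torch.load(buf)

# Diff the parameter names: any asymmetry means weights will not be matched up.
model_keys = set(Renamed().state_dict())
ckpt_keys = set(checkpoint)
print("only in model:", sorted(model_keys - ckpt_keys))
print("only in checkpoint:", sorted(ckpt_keys - model_keys))
```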

Here’s a comparison of both outputs. I’m consistently getting an increase in green, even on sources that are muted in color.

The reason this occurred is that the parameter names in the saved PyTorch .pth state dict were upper-case (RDB1, RDB2, RDB3), and load_state_dict matches weights to sub-modules by attribute name. After lower-casing the attributes, the keys no longer matched, so the checkpoint weights were never applied to those sub-modules and they kept their random initialization, which produced the incorrect output. Note that loading with strict=False makes this failure silent; strict=True (the default) would have raised an error pointing at the mismatched keys.
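The behavior can be reproduced with a minimal sketch (stand-in modules, not the real RRDB): renaming an attribute changes the state_dict keys, strict loading fails loudly, and strict=False skips the mismatched weights silently:

```python
import torch.nn as nn


# Module saved with the upper-case attribute name.
class Upper(nn.Module):
    def __init__(self):
        super().__init__()
        self.RDB1 = nn.Linear(4, 4)


# Same module after lower-casing the attribute name.
class Lower(nn.Module):
    def __init__(self):
        super().__init__()
        self.rdb1 = nn.Linear(4, 4)


saved = Upper().state_dict()  # keys: 'RDB1.weight', 'RDB1.bias'
model = Lower()

# strict=True (the default) fails loudly on the name mismatch:
try:
    model.load_state_dict(saved)
except RuntimeError:
    print("strict load raised RuntimeError")

# strict=False skips the mismatched keys silently, leaving the randomly
# initialized weights in place -- exactly the bug described above:
result = model.load_state_dict(saved, strict=False)
print(sorted(result.missing_keys))     # ['rdb1.bias', 'rdb1.weight']
print(sorted(result.unexpected_keys))  # ['RDB1.bias', 'RDB1.weight']
```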