Parallel architectures not registered

Hi!
I’m trying to create a network with multiple parallel architectures that are joined at the end by common layers. However, the parallel architectures’ parameters are not registered with the model. What’s the best way to have them registered automatically, or to register them manually?
The number of parallel architectures may vary, so I can’t just give each Sequential its own field.

import torch
import torch.nn as nn

class Dummy(nn.Module):
    def __init__(self):
        super(Dummy, self).__init__()
        self.model = []  # plain Python list -- its contents are not registered
        for _ in range(3):
            layers = []
            for _ in range(5):
                layers.append(nn.Linear(28**2, 28**2))
                layers.append(nn.Tanh())
            self.model.append(nn.Sequential(*layers))
        # 3 branches, each producing 28**2 features, are concatenated below
        self.linear = nn.Linear(28**2 * 3, 10)
    
    def forward(self, x):
        channels_out = []
        for c in self.model:
            channels_out.append(c(x))
        out = self.linear(torch.cat(channels_out, dim=-1))
        return out

Any attribute on your Dummy class that is itself an nn.Module (or nn.Parameter) is registered automatically — but a plain Python list is not, even as a field. Wrap the list in an nn.ModuleList instead; every Sequential you append to it will then be registered with the model. (You could also nest ModuleLists rather than putting all layers from all branches into a single list.)
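A sketch of your model with that one change applied — `self.model` becomes an `nn.ModuleList`, everything else stays as you wrote it (the `n_branches` argument is my addition for a variable branch count):

```python
import torch
import torch.nn as nn

class Dummy(nn.Module):
    def __init__(self, n_branches=3):
        super().__init__()
        # nn.ModuleList registers each appended submodule, so their
        # parameters show up in model.parameters() and move with .to(device)
        self.model = nn.ModuleList()
        for _ in range(n_branches):
            layers = []
            for _ in range(5):
                layers.append(nn.Linear(28**2, 28**2))
                layers.append(nn.Tanh())
            self.model.append(nn.Sequential(*layers))
        # each branch outputs 28**2 features, concatenated along the last dim
        self.linear = nn.Linear(28**2 * n_branches, 10)

    def forward(self, x):
        channels_out = [branch(x) for branch in self.model]
        return self.linear(torch.cat(channels_out, dim=-1))
```

With this, `list(Dummy().parameters())` contains the weights and biases of all branch layers plus the final linear layer, so an optimizer constructed from `model.parameters()` sees everything.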