nn.ModuleList() or Not?

Hi friends! :blush:

I am currently building a model and I am in the process of optimizing it. I am used to creating nn.ModuleLists and appending layers to them in a loop, but this time I am not sure. Is this more efficient than defining all the layers one by one? Why? Does the loop affect my running time significantly?

        # Build one conv + batch-norm pair per channel transition
        self.conv2d_layers = nn.ModuleList()
        self.batch_norm_layers = nn.ModuleList()
        channels = [in_channels, 64, 128, 256, 512]
        current_size = img_size
        for i in range(len(channels) - 1):
            self.conv2d_layers.append(nn.Conv2d(channels[i], channels[i + 1], kernel_size, stride, padding))
            self.batch_norm_layers.append(nn.BatchNorm2d(channels[i + 1]))

[MORE CODE HERE]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Conv blocks: conv -> batch norm -> activation -> 2D dropout
        for conv, norm in zip(self.conv2d_layers, self.batch_norm_layers):
            x = self.activation(norm(conv(x)))
            x = self.dropout_2d(x)
        # Classifier head
        x = self.flatten(x)
        x = self.dropout(self.activation(self.fc1(x)))
        x = self.fc2(x)
        return x

Thanks in advance!

I doubt it makes a difference in Python whether you use the loop or flatten the execution (i.e. write out each layer call one by one), as no optimizations would be applied either way and you should see approximately the same overhead. The loop itself could add a tiny bit of Python overhead, but I don't think it's noticeable next to the actual layer computations.
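
For reference, here is a minimal, self-contained sketch of the two variants side by side with a quick timeit comparison. The hyperparameters (kernel_size=3, stride=2, padding=1), the input shape, and the class names are placeholders I picked for the sketch, not values from your model:

    import timeit

    import torch
    import torch.nn as nn

    # Placeholder hyperparameters for the sketch; substitute your own.
    KERNEL, STRIDE, PAD = 3, 2, 1

    class LoopedNet(nn.Module):
        """Conv stack built with nn.ModuleList and iterated in forward."""
        def __init__(self, in_channels: int = 3):
            super().__init__()
            self.conv2d_layers = nn.ModuleList()
            self.batch_norm_layers = nn.ModuleList()
            channels = [in_channels, 64, 128, 256, 512]
            for i in range(len(channels) - 1):
                self.conv2d_layers.append(
                    nn.Conv2d(channels[i], channels[i + 1], KERNEL, STRIDE, PAD))
                self.batch_norm_layers.append(nn.BatchNorm2d(channels[i + 1]))
            self.activation = nn.ReLU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            for conv, norm in zip(self.conv2d_layers, self.batch_norm_layers):
                x = self.activation(norm(conv(x)))
            return x

    class UnrolledNet(nn.Module):
        """The same stack with every layer spelled out one by one."""
        def __init__(self, in_channels: int = 3):
            super().__init__()
            self.conv1 = nn.Conv2d(in_channels, 64, KERNEL, STRIDE, PAD)
            self.bn1 = nn.BatchNorm2d(64)
            self.conv2 = nn.Conv2d(64, 128, KERNEL, STRIDE, PAD)
            self.bn2 = nn.BatchNorm2d(128)
            self.conv3 = nn.Conv2d(128, 256, KERNEL, STRIDE, PAD)
            self.bn3 = nn.BatchNorm2d(256)
            self.conv4 = nn.Conv2d(256, 512, KERNEL, STRIDE, PAD)
            self.bn4 = nn.BatchNorm2d(512)
            self.activation = nn.ReLU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.activation(self.bn1(self.conv1(x)))
            x = self.activation(self.bn2(self.conv2(x)))
            x = self.activation(self.bn3(self.conv3(x)))
            x = self.activation(self.bn4(self.conv4(x)))
            return x

    if __name__ == "__main__":
        x = torch.randn(8, 3, 64, 64)  # dummy batch, placeholder shape
        looped, unrolled = LoopedNet().eval(), UnrolledNet().eval()
        with torch.no_grad():
            t_loop = timeit.timeit(lambda: looped(x), number=50)
            t_flat = timeit.timeit(lambda: unrolled(x), number=50)
        print(f"looped: {t_loop:.3f}s  unrolled: {t_flat:.3f}s")

On my understanding you should see near-identical timings, since the per-iteration loop cost is tiny compared to the conv kernels. If the Python-level loop ever did show up in a profile, compiling the model (e.g. via torch.compile) would remove it anyway.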