Which layers should I declare in a model's __init__()?

I used to declare all ReLU and MaxPool2d layers in the __init__() part of the model. However, these two layers have no learnable parameters. So do I need to declare only Conv2d and BatchNorm2d in the __init__() part of the model? The example below is a snippet from my code that creates a Conv block by stacking convolution, normalization, and activation layers:

import torch
import torch.nn as nn

class Conv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1, dilation=1, groups=1,
                 bias=False) -> None:
        super().__init__()
        self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                              kernel_size=kernel_size, stride=stride, padding=padding,
                              dilation=dilation, groups=groups, bias=bias)
        self.norm = nn.BatchNorm2d(num_features=out_channels)
        self.relu = nn.ReLU(inplace=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)
        x = self.norm(x)
        x = self.relu(x)

        return x

I just want to know the best practice for declaring nn.MaxPool2d and nn.ReLU: should it be inside __init__() or in the forward method?

I think it depends on your use case and eventually also on your coding style.
E.g. if you plan to replace specific layers later, using the module approach might be easier than manipulating the forward method. This post explains the different approaches in more detail.
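To illustrate the trade-off, here is a minimal sketch (the class names and channel sizes are made up for the example) comparing the two styles: registering the stateless ops as modules in __init__ versus calling their functional counterparts from torch.nn.functional directly in forward. Both produce identical outputs for the same weights; the module style just makes later layer swaps a simple attribute assignment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Module style: stateless ops (ReLU, MaxPool2d) registered in __init__.
# They hold no parameters, but being attributes makes them easy to replace.
class ModuleStyleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.act = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=2)

    def forward(self, x):
        return self.pool(self.act(self.conv(x)))

# Functional style: stateless ops called inline in forward.
# Shorter __init__, but swapping the activation means editing forward.
class FunctionalStyleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

    def forward(self, x):
        return F.max_pool2d(F.relu(self.conv(x)), kernel_size=2)

x = torch.randn(1, 3, 32, 32)
m = ModuleStyleNet()
f = FunctionalStyleNet()
print(m(x).shape)  # torch.Size([1, 16, 16, 16])
print(f(x).shape)  # torch.Size([1, 16, 16, 16])

# With the module approach, replacing a layer later is a one-liner:
m.act = nn.LeakyReLU(0.1)
```

With the functional style you would instead have to rewrite forward (or thread a flag through it) to change the activation, which is why the module approach tends to be more convenient for experimentation.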


Thanks for the reply and that post. It was really helpful.