I used to declare all ReLU and MaxPool2d layers in the __init__() part of the model. However, these two layers have no learnable parameters, so do I only need to declare Conv2d and BatchNorm2d in __init__()? The example below is a snippet of mine that builds a Conv block by stacking a convolution, a normalization layer, and an activation function:
import torch
import torch.nn as nn

class Conv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1,
                 padding=1, dilation=1, groups=1, bias=False) -> None:
        super().__init__()
        self.conv = nn.Conv2d(in_channels=in_channels,
                              out_channels=out_channels,
                              kernel_size=kernel_size,
                              stride=stride,
                              padding=padding,
                              dilation=dilation,
                              groups=groups,
                              bias=bias)
        self.norm = nn.BatchNorm2d(num_features=out_channels)
        self.relu = nn.ReLU(inplace=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)
        x = self.norm(x)
        x = self.relu(x)
        return x
I just want to know the best practice for declaring nn.MaxPool2d and nn.ReLU: should it be inside __init__(), or outside of it (e.g. called directly in forward())?
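For comparison, here is a sketch of the alternative I have in mind, using the functional API (torch.nn.functional) in forward() instead of declaring module attributes. The class name ConvFunctional is just a placeholder for this illustration:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvFunctional(nn.Module):
    """Same kind of block, but with activation/pooling as functional calls."""
    def __init__(self, in_channels, out_channels) -> None:
        super().__init__()
        # Only the layers with learnable parameters are declared here.
        self.conv = nn.Conv2d(in_channels, out_channels,
                              kernel_size=3, padding=1, bias=False)
        self.norm = nn.BatchNorm2d(num_features=out_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)
        x = self.norm(x)
        x = F.relu(x)                       # stateless, so no module attribute
        x = F.max_pool2d(x, kernel_size=2)  # likewise for pooling
        return x

Both versions compute the same thing; my question is whether one of them is considered the better style.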