This is sort of tangential to your original question, but something like this should work for defining the layers/modules:
import torch
import torch.nn as nn

class Net(nn.Module):  # the class name is arbitrary; use your own
    def __init__(self):
        super().__init__()
        # Two convolutional blocks: conv -> ReLU -> average pooling
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=(5, 5), stride=1)
        self.pool1 = nn.AvgPool2d(kernel_size=(2, 2), stride=2)
        self.conv2 = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=(5, 5), stride=1)
        self.pool2 = nn.AvgPool2d(kernel_size=(2, 2), stride=2)
        # in_features must equal channels * height * width of the flattened
        # feature map, so it depends on the spatial size of your input
        self.fc1 = nn.Linear(in_features=16 * 5 * 56, out_features=72)
        self.fc2 = nn.Linear(in_features=72, out_features=11)
Then, in your forward method, you could do something like:
    def forward(self, x):
        # First convolutional block: conv -> ReLU -> average pooling
        x = self.conv1(x)
        x = nn.functional.relu(x, inplace=True)
        x = self.pool1(x)
        # Second convolutional block
        x = self.conv2(x)
        x = nn.functional.relu(x, inplace=True)
        x = self.pool2(x)
        # Flatten everything except the batch dimension
        x = torch.flatten(x, start_dim=1)
        x = self.fc1(x)
        x = torch.tanh(x)  # nn.functional.tanh is deprecated
        x = self.fc2(x)
        # Note: if you train with nn.CrossEntropyLoss, return the raw
        # logits instead; that loss applies log-softmax internally
        x = nn.functional.softmax(x, dim=1)
        return x
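As a quick sanity check, you can push a dummy batch through the model. Be aware that the input size below is an assumption on my part: fc1's 16 * 5 * 56 in_features happens to correspond to single-channel inputs of size 32×236 (height × width); if your images have a different size, adjust fc1 accordingly.

# Hypothetical sanity check; the 32x236 input size is only an assumption
# derived from fc1's 16 * 5 * 56 in_features, so adapt it to your data
net = Net()
dummy = torch.randn(4, 1, 32, 236)  # batch of 4 single-channel images
out = net(dummy)
print(out.shape)  # torch.Size([4, 11]), one probability row per image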
Please note that there are alternative (and arguably better) ways of defining the network and the forward pass. I chose to present the examples this way to better match your table and, hopefully, to be easier to understand. If you want to improve the code, I highly recommend the nn.Sequential container module. It lets you greatly simplify the forward method by declaring each “convolutional block” as a single unit, including the ReLU and average pooling. A rough sketch of what that could look like follows.
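Here is a minimal, untested sketch of that refactor, equivalent to the Net above (the block1/block2/classifier names are just illustrative):

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Each "convolutional block" is a single Sequential unit
        self.block1 = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=1),
            nn.ReLU(inplace=True),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )
        self.block2 = nn.Sequential(
            nn.Conv2d(8, 16, kernel_size=5, stride=1),
            nn.ReLU(inplace=True),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )
        # The classifier head, including flattening and activations
        self.classifier = nn.Sequential(
            nn.Flatten(start_dim=1),
            nn.Linear(16 * 5 * 56, 72),
            nn.Tanh(),
            nn.Linear(72, 11),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        return self.classifier(x)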
Also be aware that I have not tested any of the code, so it is very likely that I have made some mistakes.