How to build a CNN model with PyTorch?

I am trying to build a CNN with the following depth and parameters:

  • Convolution Layer 1: 3 input channels, 16 output channels, 3x3 kernel
  • Convolution Layer 2: 16 input channels, 24 output channels, 4x4 kernel
  • Convolution Layer 3: 24 input channels, 32 output channels, 4x4 kernel
  • Fully connected Layer 1: * input channels, 512 output channels
  • Fully connected Layer 2: 512 input channels, 10 output channels

(*) The input size of the first fully connected layer is calculated to be 29*29*32 (the flattened output of the previous layer).

Each convolution layer is followed by a ReLU and a max-pool operation with kernel size 2x2. After each sequence of convolution, ReLU and max-pool, a dropout operation with p=0.3 is added.
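
For reference, here is how the 29x29 size can be checked, assuming 256x256 input images (an assumed resolution for this check; it is one size that yields 29x29 feature maps):

# Shape walk-through, assuming 256x256 RGB inputs (assumed resolution).
# Convs use stride 1 and no padding; each 2x2 max-pool halves the
# spatial size with floor division.
h = 256
h = (h - 3 + 1) // 2  # conv1 (3x3) -> 254, pool -> 127
h = (h - 4 + 1) // 2  # conv2 (4x4) -> 124, pool -> 62
h = (h - 4 + 1) // 2  # conv3 (4x4) -> 59,  pool -> 29
print(h * h * 32)     # 29 * 29 * 32 = 26912 features into fc1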

I have written the following code; however, I am not entirely sure whether I have correctly implemented the design described above.

import torch.nn as nn

class ConvNet(nn.Module):

    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 3)
        self.conv2 = nn.Conv2d(16, 24, 4)
        self.conv3 = nn.Conv2d(24, 32, 4)
        self.maxPool = nn.MaxPool2d(2)
        self.dropout = nn.Dropout2d(p=0.3)
        self.relu = nn.ReLU()
        self.fc1 = nn.Linear(29*29*32, 512)
        self.fc2 = nn.Linear(512, num_classes)
        self.final = nn.Softmax(dim=1)

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu(self.conv1(x))
        x = self.maxPool(x)
        x = self.dropout(x)
        x = self.conv2(x)
        x = self.relu(self.conv2(x))
        x = self.maxPool(x)
        x = self.dropout(x)
        x = self.conv3(x)
        x = self.relu(self.conv3(x))
        x = self.maxPool(x)
        x = self.dropout(x)
        x = x.reshape(x.size(0), -1)
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.final(x)
        return x

Any advice would be greatly appreciated; although I have prior Python experience, I am very inexperienced with AI/ML methods.

The code looks generally great! :slight_smile:
Some minor issues:

  • Based on the architecture it seems you are working on a multi-class classification use case. If that’s the case, you should remove the nn.Softmax as the last activation and use nn.CrossEntropyLoss as the criterion, which will internally apply F.log_softmax and nn.NLLLoss (see the sketch after this list).
  • This statement is currently not true, as you are reusing your conv layers:
        x = self.conv1(x)
        x = self.relu(self.conv1(x))

This code will use self.conv1 twice (and since conv1 expects 3 input channels, the second call will raise a shape mismatch error). I’m not sure if that’s your use case, but it doesn’t match your explanation. :wink:
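
Putting both points together, a corrected version might look like the sketch below. This assumes each conv layer should be applied exactly once, as your description states; your layer definitions are otherwise kept as posted, and the dummy usage assumes 256x256 inputs (one resolution matching the 29*29*32 flattened size):

import torch
import torch.nn as nn

class ConvNet(nn.Module):

    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 3)
        self.conv2 = nn.Conv2d(16, 24, 4)
        self.conv3 = nn.Conv2d(24, 32, 4)
        self.maxPool = nn.MaxPool2d(2)
        self.dropout = nn.Dropout2d(p=0.3)
        self.relu = nn.ReLU()
        self.fc1 = nn.Linear(29*29*32, 512)
        self.fc2 = nn.Linear(512, num_classes)
        # no nn.Softmax here: nn.CrossEntropyLoss expects raw logits

    def forward(self, x):
        # conv -> ReLU -> max-pool -> dropout, each conv applied once
        x = self.dropout(self.maxPool(self.relu(self.conv1(x))))
        x = self.dropout(self.maxPool(self.relu(self.conv2(x))))
        x = self.dropout(self.maxPool(self.relu(self.conv3(x))))
        x = x.reshape(x.size(0), -1)  # flatten to (batch, 29*29*32)
        x = self.fc1(x)
        return self.fc2(x)            # raw logits

# Dummy usage with nn.CrossEntropyLoss (256x256 inputs assumed):
model = ConvNet()
criterion = nn.CrossEntropyLoss()
logits = model(torch.randn(2, 3, 256, 256))    # two random RGB images
loss = criterion(logits, torch.tensor([3, 7])) # integer class targets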