ConvNet: preserving dimensions

Hi!
I have a CNN:

import torch.nn as nn


class ConvNet(nn.Module):
    def __init__(self, num_classes):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 4, kernel_size=(1, 5), stride=1, padding=(2, 2)),
            nn.LeakyReLU(),
            nn.MaxPool2d(kernel_size=(1, 3)))
        self.layer2 = nn.Sequential(
            nn.Conv2d(4, 8, kernel_size=(1, 5), stride=1, padding=(2, 2)),
            nn.LeakyReLU(),
            nn.MaxPool2d(kernel_size=(1, 3)))
        self.layer3 = nn.Sequential(
            nn.Conv2d(8, 16, kernel_size=(1, 5), stride=1, padding=(2, 2)),
            nn.LeakyReLU(),
            nn.MaxPool2d(kernel_size=(1, 3)))
        self.fc1 = nn.Flatten()
        self.fc2 = nn.Linear(2240, 64)
        self.fc3 = nn.Linear(64, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.fc1(out)
        out = self.fc2(out)
        out = self.fc3(out)
        return out

My input (with a batch size of 1) has shape:

torch.Size([1, 1, 128, 40])

I want to preserve the shape of (128, 40) before it goes into fc1. But I guess I’m doing the padding incorrectly since the shape after layer1 is:

torch.Size([1, 4, 132, 13])
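
Here is a standalone snippet of just layer1 that reproduces this (the comments show the shape arithmetic as I understand it):

import torch
import torch.nn as nn

x = torch.randn(1, 1, 128, 40)
conv = nn.Conv2d(1, 4, kernel_size=(1, 5), stride=1, padding=(2, 2))
pool = nn.MaxPool2d(kernel_size=(1, 3))  # stride defaults to kernel_size

# The kernel height is 1, so the height padding of 2 adds 4 rows that the
# convolution never removes: 128 + 2*2 - (1 - 1) = 132.
print(conv(x).shape)        # torch.Size([1, 4, 132, 40])
# The pool's default stride equals its kernel size, so the width is
# divided by 3: floor(40 / 3) = 13.
print(pool(conv(x)).shape)  # torch.Size([1, 4, 132, 13])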

Also, I need the convolutions and max pooling to run along each row only. Can anyone suggest how I can do that?
Also, for the input dimension of fc2 (the linear layer, 2240 here): is it possible to define this value independently of the batch size?
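
For context, the 2240 comes from the shape right before fc1; I got it like this (num_classes=10 is just a placeholder for the check):

import torch

model = ConvNet(num_classes=10)
x = torch.randn(1, 1, 128, 40)
out = model.layer3(model.layer2(model.layer1(x)))
print(out.shape)  # torch.Size([1, 16, 140, 1]) -> 16 * 140 * 1 = 2240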

Thanks a lot!

Indeed, if you use max pooling you will need much more padding. There is a PyTorch pad function (torch.nn.functional.pad) that you can apply after each max pool, if that is what you really want. That said, I don't see why you would want to max pool at all if you want to preserve the shape; you may want to use dilated convolutions instead.
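
For example, on your input size both routes keep (128, 40); a minimal sketch of just the first layer:

import torch
import torch.nn as nn

x = torch.randn(1, 1, 128, 40)

# Option 1: keep the max pool, but pad along the width only, so every op
# is shape-preserving. kernel_size=(1, 5) with padding=(0, 2) runs the
# convolution along each row and keeps (128, 40); stride=1 with
# padding=(0, 1) does the same for the (1, 3) max pool.
conv = nn.Conv2d(1, 4, kernel_size=(1, 5), padding=(0, 2))
pool = nn.MaxPool2d(kernel_size=(1, 3), stride=1, padding=(0, 1))
print(pool(conv(x)).shape)  # torch.Size([1, 4, 128, 40])

# Option 2: drop the pool and widen the receptive field with dilation.
# Width: 40 + 2*4 - 2*(5 - 1) = 40, so the shape is preserved.
dilated = nn.Conv2d(1, 4, kernel_size=(1, 5), dilation=(1, 2), padding=(0, 4))
print(dilated(x).shape)     # torch.Size([1, 4, 128, 40])

As for fc2: the 2240 is already independent of the batch size, since nn.Flatten keeps the batch dimension; what it depends on is the input height and width. If you want to avoid hard-coding it, nn.LazyLinear(64) (available since PyTorch 1.8) infers in_features on the first forward pass.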