Model building technique

Hello all. I’m new to PyTorch and currently playing with autoencoders on MNIST. I find myself repeatedly writing pieces like the following:

import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        # encoder
        self.conv1 = nn.Conv2d(1, 8, 3, 2)
        self.conv2 = nn.Conv2d(8, 16, 3, 2)
        self.conv3 = nn.Conv2d(16, 32, 3, 2)

        # fully connected bottleneck on the flattened feature map (f channels, s x s spatial)
        f = 32
        s = 3
        self.lin1 = nn.Linear(f*s*s, f*s*s)

        # decoder (mirror of the encoder)
        self.tconv3 = nn.ConvTranspose2d(32, 16, 3, 2)
        self.tconv2 = nn.ConvTranspose2d(16, 8, 3, 2)
        self.tconv1 = nn.ConvTranspose2d(8, 1, 3, 2)

You see, there is a tremendous amount of repetition here, mainly because the constructor of every layer has to be given the output shape of its predecessor. In Keras, for example, this is inferred automatically. I know there is the nn.functional API, but it looks like using it involves manually constructing the model parameters, which I’d also prefer to avoid.
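
Ideally I’d like to write something along these lines, where the channel sizes are listed once and the layers are built in a loop (the build_cnn helper below is just a sketch of what I mean, not an existing PyTorch API):

import torch.nn as nn

def build_cnn(channels, transpose=False):
    # build a stack of (transposed) conv layers from a list of channel sizes
    layer = nn.ConvTranspose2d if transpose else nn.Conv2d
    return nn.Sequential(
        *[layer(c_in, c_out, kernel_size=3, stride=2)
          for c_in, c_out in zip(channels[:-1], channels[1:])]
    )

channels = [1, 8, 16, 32]
encoder = build_cnn(channels)                                   # Conv2d 1->8->16->32
decoder = build_cnn(list(reversed(channels)), transpose=True)   # mirror of the encoder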

So, is there a standard way to write models in a more concise (and thus more easily modifiable) form in PyTorch?