How to combine Linear, BatchNorm, ReLU, and Dropout?

Hello everyone,

what is a valid way to construct a hierarchical NN architecture out of subcomponents?
The goal is to accomplish something like this:

class FeedForwardLayer( nn.Module ):
    def __init__( self ):
        super().__init__()

        self.lin = nn.Linear( ... )
        self.bn = nn.BatchNorm1d( ... )
        self.act = nn.ReLU()
        self.do = nn.Dropout( ... )

    def forward( self, x ):
        return self.do( self.act( self.bn( self.lin( x ) ) ) )

class MultiLayerNetwork( nn.Module ):
    def __init__( self ):
        super().__init__()
        self.layer_1 = FeedForwardLayer()
        ...
        self.layer_n = FeedForwardLayer()

    def forward( self, x ):
        hidden_1 = self.layer_1( x )
        hidden_2 = self.layer_2( hidden_1 )
        ...
        return hidden_n

For simplicity there are some pseudo-code elements; no need to correct them.
My question is regarding the structure.

Thanks in advance

Your code looks alright, so I’m unsure if you are seeing any issues with it or would just like some feedback?
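
In case a concrete reference helps, here is a minimal runnable sketch of the structure you describe. The feature sizes, the dropout probability, and the use of nn.ModuleList to hold the repeated blocks are placeholder choices for illustration, not the only valid way to do it:

import torch
import torch.nn as nn

class FeedForwardLayer( nn.Module ):
    def __init__( self, in_features, out_features, p=0.5 ):
        super().__init__()
        self.lin = nn.Linear( in_features, out_features )
        self.bn = nn.BatchNorm1d( out_features )
        self.act = nn.ReLU()
        self.do = nn.Dropout( p )

    def forward( self, x ):
        return self.do( self.act( self.bn( self.lin( x ) ) ) )

class MultiLayerNetwork( nn.Module ):
    def __init__( self, sizes=( 128, 64, 32 ), p=0.5 ):
        super().__init__()
        # one FeedForwardLayer per consecutive pair of feature sizes;
        # nn.ModuleList registers each block as a submodule of the parent
        self.layers = nn.ModuleList(
            FeedForwardLayer( i, o, p ) for i, o in zip( sizes[:-1], sizes[1:] )
        )

    def forward( self, x ):
        for layer in self.layers:
            x = layer( x )
        return x

model = MultiLayerNetwork()
out = model( torch.randn( 16, 128 ) )   # batch of 16 samples with 128 input features

Keeping the repeated blocks in an nn.ModuleList (rather than a plain Python list) makes sure their parameters show up in model.parameters(), state_dict(), and .to(device).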
