MLP PyTorch implementation

Hi everyone,

After doing a bit of research on the forum and looking at various pieces of code, I have a doubt about implementing an MLP in PyTorch.
In particular, I have often seen two implementations. The first is simply:

from torch import nn

MLP = nn.Linear(in_features=..., out_features=...)
The second one also includes an activation function and dropout:

from torch import nn

class MLP(nn.Module):
    def __init__(self, in_features, out_features, activation='ReLU', dropout=0.3):
        super().__init__()
        self.layers = nn.Sequential()
        act_fn = getattr(nn, activation)  # look up the activation class by name, e.g. nn.ReLU
        self.layers.add_module('fc', nn.Linear(in_features, out_features))
        self.layers.add_module(activation, act_fn())
        self.layers.add_module('dropout', nn.Dropout(dropout))

    def forward(self, x):
        return self.layers(x)
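For context, snippets like the one above are often fragments of a loop over several hidden layers. A multi-layer sketch of that fuller pattern (the `DeepMLP` name and the `hidden_dims` parameter are my own assumptions, not from the original code) might look like:

```python
import torch
from torch import nn

class DeepMLP(nn.Module):
    # Hypothetical multi-layer variant: 'hidden_dims' is an assumed
    # parameter listing the output size of each Linear block.
    def __init__(self, in_features, hidden_dims, activation='ReLU', dropout=0.3):
        super().__init__()
        self.layers = nn.Sequential()
        act_fn = getattr(nn, activation)
        for i, out_features in enumerate(hidden_dims):
            self.layers.add_module(f'fc_{i}', nn.Linear(in_features, out_features))
            self.layers.add_module(f'{activation}_{i}', act_fn())
            self.layers.add_module(f'dropout_{i}', nn.Dropout(dropout))
            in_features = out_features  # the next layer consumes this layer's output

    def forward(self, x):
        return self.layers(x)

model = DeepMLP(in_features=4, hidden_dims=[8, 2])
y = model(torch.randn(3, 4))  # output shape: (3, 2)
```

Each iteration stacks Linear, activation, and Dropout, and the `in_features = out_features` assignment chains the layer sizes together.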

My question is whether these two implementations are the same under the hood. Does PyTorch's Linear layer already come with its own default activation function and default dropout?

If they aren't the same, as I suspect, what is the most faithful implementation of an MLP?

Thank you all!

The linear layer only performs an affine transformation (y = xWᵀ + b); it has no built-in non-linearity and no built-in dropout. So the second one is the one that creates an MLP in the true sense.
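You can verify this directly: a bare `nn.Linear` output matches the affine formula exactly, and any non-linearity has to be applied by you. A minimal check (the tensor sizes here are arbitrary):

```python
import torch
from torch import nn

torch.manual_seed(0)
lin = nn.Linear(4, 4)   # no activation, no dropout hidden inside
x = torch.randn(8, 4)
out = lin(x)

# nn.Linear is purely affine: out == x @ W.T + b
is_affine = torch.allclose(out, x @ lin.weight.T + lin.bias)

# a non-linearity must be applied explicitly on top
activated = torch.relu(out)
```

If `nn.Linear` had a built-in ReLU or dropout, the `allclose` comparison would fail.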


OK thank you very much!