Hidden layers in MLP - Question

Hi all,

I am trying to validate my understanding of hidden layers :). A NN consists of three types of layers: input, hidden, and output. Input and output layers shouldn’t be considered hidden layers. Is that true?

Secondly, if we have the following network structure (coming from this post):


import torch
import torch.nn as nn


class Net(nn.Module):
    def __init__(self, num_inputs, num_hidden, num_outputs):
        super(Net, self).__init__()
        self.num_inputs = num_inputs
        self.num_hidden = num_hidden
        self.num_outputs = num_outputs

        self.lin1 = nn.Linear(num_inputs, num_hidden)
        self.lin2 = nn.Linear(num_hidden, num_hidden)
        self.lin3 = nn.Linear(num_hidden, num_hidden)
        self.lin4 = nn.Linear(num_hidden, num_outputs)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = x.view(x.size(0), -1)  # flatten everything but the batch dimension
        out = self.relu(self.lin1(x))
        out = self.relu(self.lin2(out))
        out = self.relu(self.lin3(out))
        return self.lin4(out)


self.lin1 takes the raw input, but it is also a hidden layer, since its outputs are hidden nodes
self.lin2 and self.lin3 are hidden layers
self.lin4 is the output layer → it isn’t a hidden layer

In total, we have three hidden layers. Is that true?

Lastly, the x.view call is meant to flatten the input; should I use it this way, or swap its arguments?
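For reference, here is a minimal sketch of what that x.view(x.size(0), -1) call does; the batch of 4 inputs with shape 3x8x8 is just a made-up example:

```python
import torch

# Hypothetical batch of 4 inputs, each of shape 3x8x8 (e.g. small RGB images)
x = torch.randn(4, 3, 8, 8)

# Keep the batch dimension, flatten everything else into one vector per sample
flat = x.view(x.size(0), -1)
print(flat.shape)  # torch.Size([4, 192]) since 3 * 8 * 8 = 192
```

So the first argument keeps the batch size and -1 lets PyTorch infer the remaining dimension; swapping them would flatten across samples, which is not what a per-sample MLP wants.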

Thank you,

self.lin1 is also a hidden layer. The input layer is represented by your input vector x.

For example, if you represent basic Logistic Regression with N features as a Neural Network (well, having 1 neuron), then your network has an input layer of size N and an output layer of size 1; there are no hidden layers.
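To make that concrete, a hedged sketch of logistic regression written as a PyTorch module (N = 5 features is an arbitrary choice for illustration):

```python
import torch
import torch.nn as nn

N = 5  # number of input features (arbitrary, for illustration)

# Logistic regression as a "network": a single linear layer (one neuron)
# followed by a sigmoid. Input layer of size N, output layer of size 1,
# and no hidden layers at all.
model = nn.Linear(N, 1)

x = torch.randn(2, N)           # batch of 2 samples
prob = torch.sigmoid(model(x))  # probabilities in (0, 1)
print(prob.shape)  # torch.Size([2, 1])
```

The "input layer" here is just the feature vector x itself; the only layer with weights is the output neuron.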

Thanks Chris,

So in total my network has 3 hidden layers, right?

Second, commenting on your logistic regression example: you said that the input layer will have size N and the output will have size 1, so there are no hidden layers there. But you also mentioned that we have 1 neuron - I assume you are referring to the output-layer neuron, right?