I am trying to validate my understanding of hidden layers :). A NN consists of three types of layers: input, hidden, and output. The input and output layers shouldn't be counted as hidden layers. Is that true?
Secondly, suppose we have the following network structure (taken from this post):
```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, num_inputs, num_hidden, num_outputs):
        super().__init__()
        self.num_inputs = num_inputs
        self.num_hidden = num_hidden
        self.num_outputs = num_outputs
        self.lin1 = nn.Linear(num_inputs, num_hidden)
        self.lin2 = nn.Linear(num_hidden, num_hidden)
        self.lin3 = nn.Linear(num_hidden, num_hidden)
        self.lin4 = nn.Linear(num_hidden, num_outputs)
        self.ReLU = nn.ReLU()

    def forward(self, x):
        x = x.view(x.size(0), -1)  # flatten everything but the batch dimension
        out = self.lin1(x)
        out = self.ReLU(out)
        out = self.lin2(out)
        out = self.ReLU(out)
        out = self.lin3(out)
        out = self.ReLU(out)
        return self.lin4(out)
```
The self.lin1 takes the raw input, but it is also a hidden layer, since its output nodes are hidden nodes
The self.lin2 and self.lin3 are hidden layers
The self.lin4 is the output layer → it isn't a hidden layer
In total, we have three hidden layers. Is that true?
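To double-check that count, here is a small torch-free sketch under the convention described above: every nn.Linear except the last produces a hidden representation, so the hidden-layer count is the number of Linear layers minus one. The helper name and the concrete sizes are made up for illustration.

```python
# Hypothetical sketch: represent the model as the output sizes of its
# Linear layers, in order. All but the last Linear feed hidden nodes,
# so hidden layers = number of Linear layers - 1.
def count_hidden_layers(linear_output_sizes):
    return max(len(linear_output_sizes) - 1, 0)

# lin1..lin4 from the post, e.g. with num_hidden=64 and num_outputs=10
# (assumed values): four Linear layers, the last one is the output layer.
sizes = [64, 64, 64, 10]
print(count_hidden_layers(sizes))  # → 3
```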
Lastly, the x.view is there to flatten the input; should I call it this way, x.view(x.size(0), -1), or swap its arguments?
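As a sanity check on the argument order, here is a sketch using NumPy's reshape, which follows the same row-major semantics as view on a contiguous tensor. Keeping the batch size first, as in x.view(x.size(0), -1), flattens each sample's features while preserving one row per sample, which is what nn.Linear expects; swapping the arguments would mix features from different samples into the same row (or fail outright when the sizes don't divide).

```python
import numpy as np

# A batch of 4 samples, each of shape (3, 5) -> 15 features per sample.
x = np.arange(4 * 3 * 5).reshape(4, 3, 5)

flat = x.reshape(x.shape[0], -1)  # analogue of x.view(x.size(0), -1)
print(flat.shape)  # (4, 15): batch dimension preserved, one row per sample

swapped = x.reshape(-1, x.shape[0])  # analogue of x.view(-1, x.size(0))
print(swapped.shape)  # (15, 4): rows no longer correspond to samples,
# so a Linear layer would see scrambled inputs
```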