nn.Sequential modules question

In a typical neural network, you have a first layer of input neurons, then a layer of connections (synapses) with trainable weight parameters leading to the next layer, a layer of neurons with an activation function, then another layer of weighted connections, and so on. Often, though, the synapses are considered part of the neuron itself.

My question is: does, for example, torch.nn.ReLU() include this synapse layer? Or is it just the neuron proper (the activation function), with the synapse layer (connection weights) kept separate in, for example, torch.nn.Linear()?

Will this network be missing synapses (trainable weights) between the hidden layers?:

net1 = torch.nn.Sequential(
    torch.nn.Linear(In_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.ReLU(),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Out_Dimension),
)

In contrast with this:

net2 = torch.nn.Sequential(
    torch.nn.Linear(In_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Out_Dimension),
)

No, it doesn’t: torch.nn.ReLU() is just the activation function and has no trainable parameters, so net1 is bogus (the stacked ReLUs collapse into a single one) and net2 is what you want.
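To see why the stacked activations in net1 add nothing, note that ReLU is idempotent: applying it twice gives the same result as applying it once. A minimal sketch in plain Python (the elementwise definition, no torch needed):

```python
def relu(x):
    # ReLU on a single value: max(0, x)
    return max(0.0, x)

# ReLU(ReLU(x)) == ReLU(x) for any input, so back-to-back
# nn.ReLU() modules with no Linear in between are redundant
for x in (-2.0, 0.0, 3.5):
    assert relu(relu(x)) == relu(x)
```

So net1 computes exactly the same function as a Sequential with a single ReLU between its two Linear layers.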
And, if I may add, I have seen the activation attached to the synapses in your terminology (e.g. Keras does that), but the other way round looks unusual to me.
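You can also check this directly by counting parameters: the ReLUs contribute none, so net1 only has the weights and biases of its two Linear layers. A quick sketch, with hypothetical dimensions chosen just for illustration:

```python
import torch

# Hypothetical dimensions for illustration only
In_Dimension, Hidden_Dimension, Out_Dimension = 4, 8, 2

net1 = torch.nn.Sequential(
    torch.nn.Linear(In_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.ReLU(),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Out_Dimension),
)
net2 = torch.nn.Sequential(
    torch.nn.Linear(In_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Hidden_Dimension),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_Dimension, Out_Dimension),
)

def count_params(net):
    # Total number of trainable scalars; ReLU modules add zero
    return sum(p.numel() for p in net.parameters())

# net1: (4*8+8) + (8*2+2) = 58
# net2: adds two hidden Linear layers of (8*8+8) = 72 each -> 202
print(count_params(net1), count_params(net2))  # -> 58 202
```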

Best regards

Thomas
