Hidden layers in an MLP

Hi! I am struggling to understand the basic idea of “hidden layers” in an MLP.

Given the following network:

import torch
import torch.nn as nn


class Net(torch.nn.Module):
    def __init__(self, num_inputs, num_hidden, num_outputs):
        super(Net, self).__init__()
        self.num_inputs = num_inputs
        self.num_hidden = num_hidden
        self.num_outputs = num_outputs

        self.lin = nn.Linear(num_inputs, num_hidden)    # input -> hidden
        self.lin2 = nn.Linear(num_hidden, num_hidden)   # hidden -> hidden
        self.lin3 = nn.Linear(num_hidden, num_hidden)   # hidden -> hidden
        self.lin4 = nn.Linear(num_hidden, num_outputs)  # hidden -> output
        self.ReLU = nn.ReLU()

    def forward(self, x):
        out = self.lin(x)
        out = self.ReLU(out)
        out = self.lin2(out)
        out = self.ReLU(out)
        out = self.lin3(out)
        out = self.ReLU(out)

        return self.lin4(out)
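
I call it like this just to check that it runs (the sizes here are arbitrary, not from any real dataset):

net = Net(num_inputs=4, num_hidden=16, num_outputs=3)
x = torch.randn(5, 4)   # batch of 5 samples, 4 features each
print(net(x).shape)     # torch.Size([5, 3])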

Do we have 2 hidden layers here, or 3?

How would a network with 2 hidden layers look, and how would it work?
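
For comparison, this is my naive guess at a version with 2 hidden layers (I just drop one of the hidden-to-hidden nn.Linear layers and keep the last layer as the output; the class name Net2Hidden is mine, and I'm not sure this is the right way to count):

import torch.nn as nn


class Net2Hidden(nn.Module):
    def __init__(self, num_inputs, num_hidden, num_outputs):
        super().__init__()
        self.lin = nn.Linear(num_inputs, num_hidden)    # input -> first hidden
        self.lin2 = nn.Linear(num_hidden, num_hidden)   # first hidden -> second hidden
        self.lin3 = nn.Linear(num_hidden, num_outputs)  # second hidden -> output
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.lin(x))
        out = self.relu(self.lin2(out))
        return self.lin3(out)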