What are the parameters and use of self.fc_mu() and self.fc_sigma()?

I am looking through some example code, and am not new to Machine Learning, but I saw:

self.fc_mu = nn.Linear(hidden_size, hidden_size)
self.fc_sigma = nn.Linear(hidden_size, hidden_size)

I understand that self.fc1(), self.fc2(), … refer to fully connected layers one and two, which are both nn.Linear in my example. But what is self.fc_mu referring to, and what does it return? Also, what are the parameters of self.fc_mu() and self.fc_sigma(), which I saw used many times? Thank you in advance, I am just struggling to figure this out.

self.fc_mu and self.fc_sigma are just the attribute names of two more linear layers; there is nothing special about them as layers. Their meaning depends on the context. Here the names suggest they predict the mean (mu) and scale (sigma) of a Gaussian, which are then used to apply the “reparameterization trick”: sampling mu + sigma * noise instead of drawing directly from N(mu, sigma²).
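To make the trick concrete, here is a minimal standalone sketch (the tensor values are made up for illustration). Sampling z ~ N(mu, sigma²) directly is not differentiable with respect to mu and sigma, but writing the sample as mu + sigma * eps with eps ~ N(0, 1) is, so gradients can flow back into both parameters:

```python
import torch

# Parameters of the Gaussian we want to sample from, with gradients enabled.
mu = torch.tensor([0.5, -1.0], requires_grad=True)
sigma = torch.tensor([1.0, 2.0], requires_grad=True)

# Reparameterization: draw noise independently of the parameters...
eps = torch.randn_like(sigma)

# ...then build the sample as a differentiable function of mu and sigma.
# z is distributed as N(mu, sigma^2), but autograd can differentiate it.
z = mu + sigma * eps

z.sum().backward()
# dz/dmu = 1 and dz/dsigma = eps, so both parameters receive gradients.
```

This is exactly the pattern in the forward pass below, where `mu + sigma * torch.randn_like(sigma)` plays the role of `z`.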

In the context that I am currently in, this is the code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    def __init__(self, input_size=2, output_size=1, hidden_size=128):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc_mu = nn.Linear(hidden_size, hidden_size)
        self.fc_sigma = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, output_size)

        # Initialize all linear-layer biases to zero
        for m in self.modules():
            if isinstance(m, nn.Linear):
                nn.init.constant_(m.bias, 0.0)

    def forward(self, x):
        x = F.leaky_relu(self.fc1(x), negative_slope=2e-1)
        x = F.leaky_relu(self.fc2(x), negative_slope=2e-1)
        mu = self.fc_mu(x)
        sigma = torch.sigmoid(self.fc_sigma(x))
        # Reparameterized sample: mu + sigma * eps, with eps ~ N(0, 1)
        x = torch.sigmoid(F.leaky_relu(self.fc3(mu + sigma * torch.randn_like(sigma)), negative_slope=2e-1))
        return x, mu, sigma

What would this entail here, and what exactly does the reparameterization trick involve?