How to remove neurons from the hidden layer based on their distinctiveness or contribution to the final output

I have already created a neural network that uses sigmoid() as its activation function, and now I want to measure the contribution of each neuron and remove neurons on that basis. One approach I have encountered in a research paper is this: for each hidden unit, construct a vector with the same dimensionality as the number of patterns in the training set, where each component of the vector corresponds to the output activation of the unit on one training pattern. This vector represents the functionality of the hidden unit in (input) pattern space.
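If the technique you mean is the "distinctiveness" approach, those per-unit activation vectors are compared pairwise by the angle between them: units whose vectors are nearly parallel are doing similar work and one of the pair can be removed. Here is a rough sketch of that comparison; `pairwise_angles` is my own helper name, the 15° threshold is only an illustrative value, and the centring around 0.5 is an assumption about the paper's normalisation, not something stated in your question.

```python
import torch

def pairwise_angles(activations, centre=True):
    # activations: (num_patterns, hidden_size); column j is the activation
    # vector of hidden unit j over the whole training set.
    if centre:
        # Centring sigmoid outputs around 0.5 lets angles exceed 90 degrees,
        # so near-complementary unit pairs can also be detected (assumed
        # normalisation, not taken from the question).
        activations = activations - 0.5
    vecs = activations.t()                         # (hidden_size, num_patterns)
    normed = vecs / vecs.norm(dim=1, keepdim=True) # unit-length rows
    cos = torch.clamp(normed @ normed.t(), -1.0, 1.0)
    return torch.rad2deg(torch.acos(cos))          # pairwise angles in degrees

acts = torch.sigmoid(torch.randn(100, 8))  # stand-in for real activations
angles = pairwise_angles(acts)
# Units whose vectors sit at a small angle (e.g. < 15 degrees) do similar
# work and are candidates for merging or removal.
similar = (angles < 15.0) & ~torch.eye(8, dtype=torch.bool)
```

In practice you would collect `acts` by running the trained network over the training set once and recording the sigmoid outputs, rather than using random data as here.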
I am pretty new to neural networks, so apologies in advance if someone finds this question very basic.

This is how my code looks at the moment:

class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.sigmoid = nn.Sigmoid()
        self.fc4 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)            # x was never passed through fc1 before
        out = self.sigmoid(out)
        print(out)                   # hidden-unit activations for inspection
        new = torch.sum(out, dim=0)  # summed activation per hidden unit (currently unused)
        out = self.fc4(out)
        return out
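Once you have decided which hidden units to keep, one way to actually remove the others is to rebuild `fc1` and `fc4` with the corresponding rows and columns sliced out. This is only a sketch under the two-layer layout above; `prune_hidden_units` and `keep_idx` are my own names, and it relies on the fact that `nn.Linear` stores its weight with shape `(out_features, in_features)`, so a hidden unit owns one row of `fc1.weight` and one column of `fc4.weight`.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.sigmoid = nn.Sigmoid()
        self.fc4 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.sigmoid(self.fc1(x))
        return self.fc4(out)

def prune_hidden_units(model, keep_idx):
    # Rebuild fc1/fc4, keeping only the hidden units listed in keep_idx.
    # A removed unit's incoming row in fc1 and outgoing column in fc4
    # are dropped together.
    keep_idx = torch.as_tensor(keep_idx, dtype=torch.long)
    new_fc1 = nn.Linear(model.fc1.in_features, len(keep_idx))
    new_fc4 = nn.Linear(len(keep_idx), model.fc4.out_features)
    with torch.no_grad():
        new_fc1.weight.copy_(model.fc1.weight[keep_idx])
        new_fc1.bias.copy_(model.fc1.bias[keep_idx])
        new_fc4.weight.copy_(model.fc4.weight[:, keep_idx])
        new_fc4.bias.copy_(model.fc4.bias)
    model.fc1, model.fc4 = new_fc1, new_fc4
    return model

net = Net(input_size=4, hidden_size=8, num_classes=3)
net = prune_hidden_units(net, [0, 2, 5])  # keep 3 of the 8 hidden units
out = net(torch.rand(10, 4))
```

Note that the pruned network's outputs will shift because the removed units' contributions disappear, so a short retraining pass after pruning is usually needed.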