How to access modules by name

Hi,
I’m writing a function that computes the sparsity of the weight matrices of the following fully connected network:

class FCN(nn.Module):
    def __init__(self):
        super(FCN, self).__init__()

        
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.relu2 = nn.ReLU()
        self.fc3 = nn.Linear(hidden_dim, hidden_dim)
        self.relu3 = nn.ReLU()
        self.fc4 = nn.Linear(hidden_dim, output_dim)
    
    def forward(self, x):

        out = self.fc1(x)
        out = self.relu1(out)
        out = self.fc2(out)
        out = self.relu2(out)
        out = self.fc3(out)
        out = self.relu3(out)
        out = self.fc4(out)

        return out

The function I have written is the following:

def print_layer_sparsity(model):
    for name,module in model.named_modules():
        if 'fc' in name:
            zeros = 100. * float(torch.sum(model.name.weight == 0))
            tot = float(model.name.weight.nelement())
            print("Sparsity in {}.weight: {:.2f}%".format(name, zeros/tot))

But it gives me the following error:

torch.nn.modules.module.ModuleAttributeError: ‘FCN’ object has no attribute ‘name’

It works fine when I manually enter the names of the layers, e.g.:

(model.fc1.weight == 0)
(model.fc2.weight == 0)
(model.fc3.weight == 0) …

but I’d like to make it independent of the network. In other words, I’d like to adapt the function so that, given any sparse network, it prints the sparsity of every layer. Any suggestions?

Thanks!!

Have you tried using model.children()?
Afterward, you can pick the child (= layer) and do layer.parameters().

Naive way:

for layer in model.children():
    weights = list(layer.parameters())[0]  # parameters() returns a generator; the first entry is the weight
    # do stuff
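The error in the original function comes from `model.name`, which looks up a literal attribute called `name` on the model rather than the layer the loop is currently visiting. Since `named_modules()` already yields the module object itself, a minimal fix could look like this (filtering on `nn.Linear` instead of `'fc' in name` is an assumption on my part, so it generalizes beyond layers that happen to be named `fc*`):

```python
import torch
import torch.nn as nn

def print_layer_sparsity(model):
    # named_modules() yields (name, module) pairs; `module` is already
    # the layer object, so there is no need to look it up via model.<name>
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            zeros = float(torch.sum(module.weight == 0))
            tot = float(module.weight.nelement())
            print("Sparsity in {}.weight: {:.2f}%".format(name, 100. * zeros / tot))
```

With the `isinstance` check, the function works for any network, regardless of how its layers are named.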

Do you know if layer.parameters() takes masked weights into account? model.parameters(), for example, does not.

What do you mean by taking care of the masked weights?

Sorry, I thought I had specified it. The weights are masked using a pruning function (such as prune.l1_unstructured(module, name="bias", amount=3)).
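For context on what torch.nn.utils.prune does to the module: it moves the original tensor to a `weight_orig` parameter and stores a `weight_mask` buffer, while `module.weight` is recomputed as `weight_orig * weight_mask`. So iterating over `parameters()` sees the unmasked `weight_orig`, but reading `module.weight` gives the masked values. A quick sketch (the `nn.Linear(4, 4)` module here is just an illustration):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

lin = nn.Linear(4, 4)
prune.l1_unstructured(lin, name="weight", amount=0.5)  # zero out 50% of the weights

# The parameter list now holds the unmasked original; the mask is a buffer.
print(sorted(n for n, _ in lin.named_parameters()))  # ['bias', 'weight_orig']
print([n for n, _ in lin.named_buffers()])           # ['weight_mask']

# lin.weight itself is the masked tensor, so sparsity checks on it are correct.
sparsity = 100. * float(torch.sum(lin.weight == 0)) / lin.weight.nelement()
print("sparsity: {:.2f}%".format(sparsity))          # sparsity: 50.00%
```

This is why checking `module.weight == 0` counts the pruned entries, while looping over `parameters()` alone would miss them.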

See here: How to access to a layer by module name? - #6 by klory
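Building on that thread: in recent PyTorch versions a submodule can also be fetched directly from its name string with `model.get_submodule(name)`, or via a dict built from `named_modules()`. A small sketch (the `nn.Sequential` model here is just an illustration):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Fetch a submodule by its (dotted) name string.
fc = model.get_submodule("0")                # same object as model[0]
fc_again = dict(model.named_modules())["0"]  # equivalent dict-based lookup
assert fc is fc_again is model[0]
print(fc.weight.shape)  # torch.Size([8, 4])
```

The dict-based lookup works on any PyTorch version, while `get_submodule` additionally raises a clear error message when the name does not exist.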