Accessing layers from a ModuleList via Hooks

I’m using hooks for the first time, and have followed this tutorial for registering forward and backward hooks on the layers of a network. When I try to extend it to an arbitrary number of layers (using a ModuleList to hold the layers in my NN model class), I get a “list index out of range” error when I try to select a specific layer from the ModuleList.

Is there a best practice for selecting one layer out of a ModuleList to create a hook for it? Or, more generally, how (if it’s possible) does one get at the layers of a ModuleList from a ._modules.items() call?

Beyond the code in that tutorial, my NN class is roughly:

import torch
import torch.nn as nn
import torch.nn.functional as func

class aModel(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        self.model = nn.ModuleList()
        self.conv_layers = kwargs["conv_layer_count"]
        self.det_conv_layers = kwargs["det_conv_layer_count"]
        self.lin_in = kwargs["det_lin_in"]
        for i in range(self.conv_layers):
            self.model.append(nn.Conv2d(in_channels=kwargs["conv_channels"][i],
                                        out_channels=kwargs["conv_channels"][i + 1],
                                        kernel_size=kwargs["conv_kernel_sizes"][i]))
        for i in range(self.conv_layers):
            self.model.append(nn.ConvTranspose2d(in_channels=kwargs["conv_channels_backwards"][i],
                                                 out_channels=kwargs["conv_channels_backwards"][i + 1],
                                                 kernel_size=kwargs["conv_kernel_sizes_backwards"][i]))
        for i in range(self.det_conv_layers):
            self.model.append(nn.Conv2d(in_channels=kwargs["det_conv_channels"][i],
                                        out_channels=kwargs["det_conv_channels"][i + 1],
                                        kernel_size=kwargs["det_kernel_sizes"][i]))
        self.model.append(nn.Linear(in_features=kwargs["det_lin_in"],
                                    out_features=kwargs["det_lin_out"]))

    def forward(self, features):
        # apply every conv/deconv layer, then flatten for the final Linear
        for i in range(len(self.model) - 1):
            features = self.model[i](features)
            features = func.relu(features)
        features = torch.reshape(features, (-1, self.lin_in))
        features = self.model[-1](features)
        features = func.relu(features)
        return features

and my code to create the hooks is

    captainHook = None
    index = 0
    print("Items = " + str(list(model._modules.items())))
    print("Layer 0 = " + str(list(model._modules.items())[1][0]))
    hookF = [Hook(layer[1]) for layer in list(model._modules.items())]
    hookB = [Hook(layer[1], backward=True) for layer in list(model._modules.items())]

    for hook in hookF:
        if index == 2 * conv_layers - 1:
            captainHook = hook
        index += 1

and the output is:

[('model', ModuleList(
  (0): Conv2d(3, 4, kernel_size=(3, 3), stride=(1, 1))
  (1): Conv2d(4, 8, kernel_size=(3, 3), stride=(1, 1))
  (2): Conv2d(8, 4, kernel_size=(3, 3), stride=(1, 1))
  (3): ConvTranspose2d(4, 8, kernel_size=(3, 3), stride=(1, 1))
  (4): ConvTranspose2d(8, 4, kernel_size=(3, 3), stride=(1, 1))
  (5): ConvTranspose2d(4, 3, kernel_size=(3, 3), stride=(1, 1))
  (6): Conv2d(3, 4, kernel_size=(3, 3), stride=(1, 1))
  (7): Conv2d(4, 8, kernel_size=(3, 3), stride=(1, 1))
  (8): Conv2d(8, 4, kernel_size=(3, 3), stride=(1, 1))
  (9): Linear(in_features=1936, out_features=4, bias=True)
))]
Traceback (most recent call last):
  File "", line 307, in <module>
  File "", line 230, in train
    print("Layer 0 = "+str(list(model._modules.items())[1][0]))
IndexError: list index out of range

Before anyone asks, yes, I do have a good reason for convolving, inverting, and then reconvolving. As far as I know, that shouldn’t be affecting this issue.

I think your list comprehension is returning the entire nn.ModuleList as a single module, not its internal layers, which is why the indexing fails: model._modules.items() yields just one ('model', ModuleList) pair, so index [1] is out of range.
Here is a small code snippet, which works:

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.modlist = nn.ModuleList()
        for _ in range(5):
            self.modlist.append(nn.Linear(10, 10))

    def forward(self, x):
        for m in self.modlist:
            x = m(x)
        return x

model = MyModel()
x = torch.randn(1, 10)
out = model(x)

[m.register_forward_hook(lambda m, input, output: print(output.shape)) for m in model.modlist]

out = model(x)
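To get at the original question of hooking one specific layer, you can index into the ModuleList directly rather than going through ._modules. A minimal sketch reusing the MyModel pattern above (the layer index 2, the dict name captured, and the function name save_output are just my own illustrative choices):

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.modlist = nn.ModuleList(nn.Linear(10, 10) for _ in range(5))

    def forward(self, x):
        for m in self.modlist:
            x = m(x)
        return x

model = MyModel()

# Store the hooked layer's output here so it can be used after the forward pass.
captured = {}

def save_output(module, inputs, output):
    captured["layer2"] = output.detach()

# Index into the ModuleList to hook a single layer (here, layer 2).
handle = model.modlist[2].register_forward_hook(save_output)

out = model(torch.randn(1, 10))
print(captured["layer2"].shape)  # torch.Size([1, 10])
handle.remove()  # detach the hook when done
```

register_forward_hook returns a handle, so the hook can be removed once you have what you need.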

If we want to use the layer’s hook output value, how could we do that with this lambda syntax?

You could replace the print(output.shape) with your own function inside the lambda call.
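For instance, a small sketch that accumulates each layer’s output in a list via the lambda’s closure instead of printing (the names activations and handles are my own, not from the thread):

```python
import torch
import torch.nn as nn

layers = nn.ModuleList(nn.Linear(10, 10) for _ in range(3))

activations = []  # the hooks append to this list during the forward pass

# Same lambda style as above, but the body stores the output rather than printing it.
handles = [m.register_forward_hook(
               lambda mod, inp, out: activations.append(out.detach()))
           for m in layers]

x = torch.randn(1, 10)
for m in layers:
    x = m(x)

# activations now holds one tensor per layer, in execution order.
for h in handles:
    h.remove()  # clean up the hooks when finished
```

Keeping the returned handles around is useful here, since hooks registered in a list comprehension would otherwise be hard to remove later.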