How to use Hooks to obtain layer outputs

Beginner question: I was trying to use PyTorch hooks to get the layer outputs of a pretrained model. I've tried two approaches, both with some issues:

method 1:

import torch
import torch.nn as nn
from efficientnet_pytorch import EfficientNet

net = EfficientNet.from_pretrained('efficientnet-b7')

visualisation = {}

def hook_fn(m, i, o):
  visualisation[m] = o

def get_all_layers(net):
  for name, layer in net._modules.items():
    # If it is a sequential, don't register a hook on it
    # but recursively register hooks on all its module children
    if isinstance(layer, nn.Sequential):
      pass
    else:
      # it's not a sequential. Register a hook
      layer.register_forward_hook(hook_fn)

get_all_layers(net)

out = net(torch.randn(2, 3, 224, 224))

# Just to check whether we got all layers
visualisation.keys()

This is from the tutorial. However, there are two issues:

  1. the result only has 8 keys/items - I was expecting it to have a lot more layers.
  2. EfficientNet seems to require batched input, but say I want to store the second-to-last layer's output for each of 10 different images in a list or array - how can I do so?

method 2:
I was trying to understand how to use the Hook class below to extract the second-to-last layer's output from a bunch of images without saving every layer's output. Also, if I'm just passing inputs through a pre-trained model, do I still need a backward step?

class Hook():
    def __init__(self, module, backward=False):
        # Register either a forward or a backward hook on the given module
        if not backward:
            self.hook = module.register_forward_hook(self.hook_fn)
        else:
            self.hook = module.register_backward_hook(self.hook_fn)
    def hook_fn(self, module, input, output):
        # Store the latest input/output seen by the module
        self.input = input
        self.output = output
    def close(self):
        self.hook.remove()
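
For context, this is roughly how I imagined using it (the layer name here is just a placeholder for whichever layer I actually want):

import torch
from efficientnet_pytorch import EfficientNet

net = EfficientNet.from_pretrained('efficientnet-b7')
net.eval()

# '_conv_head' is just an example layer name; pick whichever layer you need
hook = Hook(net._conv_head)

with torch.no_grad():
    out = net(torch.randn(2, 3, 224, 224))

print(hook.output.shape)   # activation captured by the forward hook
hook.close()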

Thanks in advance.

Hi,

the result only has 8 keys/items - I was expecting it to have a lot more layers.

First of all, I wouldn’t use the ._modules private attribute but the .children() public API.
As you can see in the doc, it only gives the direct submodules for the current one. And so if you want to get all of them, you will need to call .children() on each submodule you see as well!
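
For example, something along these lines (just a sketch, reusing your hook_fn and net from above):

def register_all_hooks(module):
    children = list(module.children())
    if len(children) == 0:
        # Leaf module (Conv2d, BatchNorm2d, ...): register the hook here
        module.register_forward_hook(hook_fn)
    else:
        # Container module: recurse into its direct children
        for child in children:
            register_all_hooks(child)

register_all_hooks(net)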

EfficientNet seems to require batched input, but say I want to store the second-to-last layer's output for each of 10 different images in a list or array - how can I do so?

I’m not sure what you mean by that sentence.
You can access any value you want in the stored output, so you can just take the samples you want from the batch.
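
For example, roughly like this (a sketch, reusing the visualisation dict, hooks, and net from your first snippet):

import torch

# Placeholder data: a list of 10 images of shape (3, 224, 224)
images = [torch.randn(3, 224, 224) for _ in range(10)]
batch = torch.stack(images)              # shape (10, 3, 224, 224)

with torch.no_grad():
    net(batch)

# Pick the hooked module you care about; here we just grab the last entry as an example
layer_out = list(visualisation.values())[-1]
per_image_outputs = [layer_out[i] for i in range(len(images))]   # one tensor per image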

Thank you so much for your answer! I wasn't familiar with the difference between modules and children - I will try children instead.

For the next part, I mean that if I only pass in one image, the net won't work and will return 'Expected more than 1 value per channel when training'.

I guess this is because you have a batchnorm or similar normalization layer in there that, in training mode, only works when you pass in more than 1 sample?
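
If you just want the activations, you can put the network in eval mode before the forward pass; something like this (sketch) should avoid that error:

net.eval()   # BatchNorm then uses its running statistics, so a batch of 1 works
with torch.no_grad():
    out = net(torch.randn(1, 3, 224, 224))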

If I run for name, layer in efficientNet.children() I get 'TypeError: cannot unpack non-iterable Conv2dStaticSamePadding object', and if I run .children().items() instead I get 'AttributeError: 'generator' object has no attribute 'items''. I'm still puzzled by how to get a particular layer's output from efficientNet, or how to refer to a layer by its name.

You can check the doc. This function yields just the modules, so you cannot do name, mod in XXX.children(). As you can see in the doc, if you want the names as well, you can use .named_children().
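
For example, something like this (a sketch; the layer name is just my guess at what the efficientnet_pytorch implementation calls it, so check the printout first):

# Print the names of the direct submodules
for name, module in net.named_children():
    print(name, type(module).__name__)

# Grab the one you want by name and hook it, e.g. with the Hook class from your method 2
# ('_avg_pooling' is only an assumed name; use whatever the printout above shows)
layer = dict(net.named_children())['_avg_pooling']
hook = Hook(layer)

net.eval()
with torch.no_grad():
    net(torch.randn(1, 3, 224, 224))

print(hook.output.shape)
hook.close()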