Is there any way of accessing the output from intermediate layers?

Let’s say I have a function that receives:

  • a network module, containing multiple layers
  • the output of that network module, from passing in a batch
  • nothing else

… is there any way of accessing the intermediate results produced by the intervening layers as the batch passed through them on the way to that output?

I’m wondering whether autograd perhaps stores this information somewhere, so that we could access it ourselves?

Hey!

No; to reduce memory usage, autograd tries very hard not to save all of these intermediate results, so you cannot rely on them actually being saved.
You could use forward hooks on the nn.Modules to force saving some of the results during the forward pass, so that you can access them later.
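For example, here is a rough sketch (the toy network, names, and tensor shapes below are made up for illustration) that registers a forward hook on each submodule and stashes its output:

import torch
from torch import nn

# Toy network standing in for the real module; any nn.Module works.
net = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 5))

intermediates = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Stash a detached copy of this layer's output for later inspection.
        intermediates[name] = output.detach()
    return hook

# Register a forward hook on every submodule before running the batch.
handles = [m.register_forward_hook(make_hook(name))
           for name, m in net.named_modules() if name]

out = net(torch.randn(4, 10))
# intermediates now maps submodule names ("0", "1", "2") to their outputs.

for h in handles:
    h.remove()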


Ah, that’s an interesting idea. I can assume that I do have access to the network before the batch is passed through it, so I could add hooks at that point. Interesting.

Oh, I just realized that our doc binding for global hooks is broken :confused:
But you can use register_module_forward_hook() (in torch.nn.modules.module) to register a single hook that runs on every module, so you don’t have to hook each Module in there individually.
For now, you can find the documentation inline in the source.
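As a rough sketch (the toy network below is just for illustration), a single global hook registered this way fires after every module’s forward pass:

import torch
from torch import nn

# Outputs from every module that runs, collected in one place.
outputs = []

def save_output(module, inputs, output):
    # This runs after every nn.Module's forward pass.
    outputs.append((type(module).__name__, output))

# One global hook instead of hooking each submodule individually.
handle = nn.modules.module.register_module_forward_hook(save_output)

net = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 5))
out = net(torch.randn(4, 10))

handle.remove()  # the hook is process-wide, so remember to remove it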


Note (since I forget this each time), this is how to call that function:

from torch import nn

# hook is a callable with signature hook(module, input, output)
nn.modules.module.register_module_forward_hook(hook)