How to access the input/output activations of a layer given its parameter names?

I was wondering if it is possible to get the input and output activations of a layer given its parameter names.

For example, assume a weight tensor is called module.fc3.weights. Can I access the inputs and outputs of the layer that contains this weight tensor?

I only need to do this once for a pretrained neural network, so performance is not a concern.

You could use forward hooks and use the parameter name to decide where to register them.
Let me know if that would work for you.
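
Here is a minimal sketch of that idea, assuming a toy model with an `fc3` layer and the parameter name `fc3.weight` (both made up for illustration): strip the final attribute from the parameter name, look the owning module up via `named_modules()`, and register a forward hook on it.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 20)
        self.fc3 = nn.Linear(20, 5)

    def forward(self, x):
        return self.fc3(torch.relu(self.fc1(x)))

model = Net()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # inputs is a tuple of the module's positional inputs;
        # clone so later in-place ops cannot change the stored tensors
        activations[name] = (
            tuple(i.detach().clone() for i in inputs),
            output.detach().clone(),
        )
    return hook

param_name = "fc3.weight"                   # hypothetical parameter name
module_name = param_name.rsplit(".", 1)[0]  # -> "fc3", the owning module
module = dict(model.named_modules())[module_name]
module.register_forward_hook(save_activation(module_name))

model(torch.randn(2, 10))
print(activations["fc3"][1].shape)          # torch.Size([2, 5])
```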

Thank you for the answer. This should solve the problem. I just need to find a method to iterate over all layers within the neural network and add these hooks automatically.

I have two follow-up questions:

  • If the activation function is defined inside a container such as nn.Sequential, e.g. nn.ReLU(), does the hooked output include that activation function? In other words, is the output captured before or after the activation function is applied?
  • Would I be able to combine multiple layers into one? For example, if a layer is followed by batch normalization, can I get the output after batch normalization is applied?

This approach might work to register hooks for all modules.
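
A minimal sketch of that, assuming a small nn.Sequential model made up for illustration: iterate over `named_modules()` and register a hook on every leaf module, so each layer's output is stored under its qualified name.

```python
import torch
import torch.nn as nn

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach().clone()
    return hook

model = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 5),
)

for name, module in model.named_modules():
    # Skip containers (e.g. the Sequential itself) so each hook
    # fires on a single layer
    if len(list(module.children())) == 0:
        module.register_forward_hook(make_hook(name))

model(torch.randn(2, 10))
print(sorted(activations))  # ['0', '1', '2']
```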

  1. If you are using out-of-place activations, the non-linearity will be applied to the input and return a new output tensor, which you could clone into your dict, so the hooked output is the post-activation value. However, if you are using inplace=True, note that the input will also be manipulated in-place.

  2. Yes, you can register the hook on any module (including one that contains other submodules), and the hook will then receive the output of this outer module.
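
For the second point, a small sketch (the block below is a made-up example): hooking the outer nn.Sequential yields the activation after batch normalization rather than the intermediate linear output.

```python
import torch
import torch.nn as nn

# Outer block: linear layer followed by batch norm
block = nn.Sequential(
    nn.Linear(10, 20),
    nn.BatchNorm1d(20),
)

captured = {}

def block_hook(module, inputs, output):
    # output here is the result of the whole block,
    # i.e. after batch normalization is applied
    captured["block"] = output.detach().clone()

block.register_forward_hook(block_hook)

block(torch.randn(4, 10))
print(captured["block"].shape)  # torch.Size([4, 20])
```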