How can I extract features from all the layers of a loaded GNN model?

Recently, I git cloned a GNN network and am trying to get the output from all the layers (as I am not sure what each layer is named).

There is a file in the GitHub repo, i.e. pretrained.py, that does the prediction by loading a pre-trained model with an example input, and I am trying to add some additional code to it to extract the outputs.

I have tried adding the following code just after model.eval() in the above-mentioned file; it was mentioned in other threads, where it was used to extract intermediate layer outputs from loaded CNN models:

activation = {}
def get_activation(name):
    # store the (detached) output of the module under its name
    def hook(model, input, output):
        activation[name] = output.detach()
    return hook

# register the hook on every named module of the loaded model
for name, layer in model.named_modules():
    layer.register_forward_hook(get_activation(name))

But I am getting the following error:

Traceback (most recent call last):
  File "alignn/pretrained.py", line 248, in <module>
    out_data = get_prediction(
  File "alignn/pretrained.py", line 221, in get_prediction
    model([g.to(device), lg.to(device)])
  File "/home/vgf3011/.virtualenvs/alignn/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/home/vgf3011/vishu/alignn/alignn/models/alignn.py", line 290, in forward
    x, y, z = alignn_layer(g, lg, x, y, z)
  File "/home/vgf3011/.virtualenvs/alignn/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1120, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/home/vgf3011/vishu/alignn/alignn/models/alignn.py", line 163, in forward
    x, m = self.node_update(g, x, y)
  File "/home/vgf3011/.virtualenvs/alignn/lib64/python3.8/site-packages/torch/nn/modules/module.py", line 1123, in _call_impl
    hook_result = hook(self, input, result)
  File "alignn/pretrained.py", line 168, in hook
    activation[name] = output.detach()
AttributeError: 'tuple' object has no attribute 'detach'

Is there anything I am doing wrong, or do I have to be more specific about how to extract the output from each of the layers?

As the error message indicates, output is a tuple, so you can’t call detach() on it (detach() is a tensor method) and you would need to unwrap the tuple first.

Can you please let me know how to unwrap the tuple?

You could either iterate it or assign its contents to separate variables:

import torch

a = (torch.tensor(1), torch.tensor(2))
print(type(a))
# > <class 'tuple'>

# iterate it
for x in a:
    print(x)

# unwrap
x, y = a

Is this something I have to do at the places where the variable is a tuple, i.e. in between the layers of the GNN architecture?

No, you would have to unwrap the tuple in the get_activation method, since that is where the error is raised:

activation[name] = output.detach()

When I tried to print the output type via type(output) inside the get_activation function, some outputs turn out to be tuples at varying intervals. I am not sure if it is ok to ignore them for now, but does it make sense to unwrap output wherever it is a tuple and then detach it, like:

out1, out2 = output
activation[name] = out1.detach()
activation[name] = out2.detach()

This would mean that different layers return a variable number of outputs (sometimes a single output is returned, otherwise multiple outputs).

You should check if output is indeed a tuple before unwrapping it, as otherwise you might slice a tensor or run into a ValueError. Also, in your current code you are overwriting the activation[name] value, so you should consider using different keys to store the tuple’s tensors in the activation dict.
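
A minimal sketch of what that could look like, assuming model is the loaded ALIGNN model from pretrained.py (the _out{i} key suffixes are just one way to keep the tuple elements apart):

import torch

activation = {}

def get_activation(name):
    def hook(module, input, output):
        # Some layers return a single tensor, others a tuple of tensors,
        # so branch on the type before calling detach().
        if isinstance(output, tuple):
            for i, out in enumerate(output):
                if torch.is_tensor(out):
                    # use a distinct key per tuple element so nothing gets overwritten
                    activation[f"{name}_out{i}"] = out.detach()
        elif torch.is_tensor(output):
            activation[name] = output.detach()
    return hook

for name, layer in model.named_modules():
    layer.register_forward_hook(get_activation(name))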

Thank you. Also, is there any way to switch the activations on/off, i.e. to get the output of a layer with or without it passing through the activation function, while using the get_activation function?

“Activations” refers to the layer outputs in this context, not to a specific activation function such as nn.ReLU, so it’s not possible to “switch it off” without manipulating the return statement in the layer’s forward method.

I am not sure how to frame this correctly, but is the output we are currently getting the raw output of the layers only, or does it pass through functions like ReLU before being printed out?

It depends on the model definition and, in particular, on how the forward method is implemented.
In your code snippet you are using:

for name, layer in model.named_modules():
    layer.register_forward_hook(get_activation(name))

to register the forward hook for each module.
If the activation functions (e.g. nn.ReLU()) are defined as modules via self.act1 = nn.ReLU(), a separate forward hook will be registered for them. However, if the activation functions are used via the functional API in the forward method, e.g. x = F.relu(x), then no forward hook will be registered for them. In both cases, if the activation function uses inplace=True, it will be applied to (and visible in) the previous layer’s stored output.
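
As a small self-contained illustration (a toy module, not the ALIGNN code):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 4)
        self.act1 = nn.ReLU()       # module: a forward hook can be registered on it
        self.fc2 = nn.Linear(4, 4)

    def forward(self, x):
        x = self.act1(self.fc1(x))  # the hook on act1 sees the post-ReLU output
        x = F.relu(self.fc2(x))     # functional API: no module, so no hook fires here
        return x

toy = Toy()
acts = {}
for name, layer in toy.named_modules():
    layer.register_forward_hook(lambda mod, inp, out, name=name: acts.update({name: out.detach()}))

toy(torch.randn(2, 4))
print(acts.keys())  # '', 'fc1', 'act1', 'fc2' -- nothing for the functional F.relu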

I would also like to know if it is possible to get the output of a certain variable inside the forward pass, such as z in z = x + self.layer1(x)? Can it be incorporated into the above loop, or do I have to get it via some other method?

You could return z in addition to the original output in the forward method, or e.g. append it to a global list.
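
For example, a rough sketch with a made-up Block module (not the actual ALIGNN layer) whose forward computes z = x + self.layer1(x):

import torch
import torch.nn as nn

intermediates = []  # option 1: a global list that collects z on every forward pass

class Block(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.layer1 = nn.Linear(dim, dim)

    def forward(self, x):
        z = x + self.layer1(x)
        intermediates.append(z.detach())  # option 1: stash z in the global list
        return torch.relu(z), z           # option 2: also return z with the output

Note that option 2 changes the return signature of forward, so any code calling this layer would need to be updated to unpack the extra value.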