How do I set the order of PyTorch hooks?

Hugging Face's `output_attentions=True` path seems to capture the attention weights before any of my PyTorch hooks affect the output.

So I want to set up two sets of hooks:

  1. the original ablation hooks
  2. the hooks to calculate the attention weights

Is there a way to set the order of PyTorch hooks? I want to make sure the ablation hooks run first.
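For reference, a minimal toy check (a plain `nn.Linear`, not my actual model) of the default behavior: forward hooks on a module fire in the order they were registered.

```python
import torch
import torch.nn as nn

calls = []
layer = nn.Linear(4, 4)

# Registered first, so it runs first by default
layer.register_forward_hook(lambda m, i, o: calls.append("ablation"))
# Registered second, so it runs second
layer.register_forward_hook(lambda m, i, o: calls.append("view"))

layer(torch.randn(1, 4))
print(calls)  # ['ablation', 'view']
```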

So far, my ablation hooks are added like this:
```
hooks = []

# Define an ablation hook for a specific head
def create_ablation_hook(target_head_num):
    def ablation_hook(module, input, output):
        attention_output = output[0]
        batch_size_tensor, seq_len, hidden_dim = attention_output.shape
        num_heads = module.num_attention_heads
        head_dim = hidden_dim // num_heads

        # Zero out the specific head
        reshaped = attention_output.view(batch_size_tensor, seq_len, num_heads, head_dim)
        reshaped[:, :, target_head_num, :] = 0
        modified = reshaped.view(batch_size_tensor, seq_len, hidden_dim)

        return (modified,) + output[1:]
    return ablation_hook

for layer_idx, head_num in reciever_heads_sorted:
    print(f"    Adding hook for head ({layer_idx}, {head_num})")

    # Register hook for this layer
    attention_layer = model.model.layers[layer_idx].self_attn
    hook = attention_layer.register_forward_hook(create_ablation_hook(head_num))
    hooks.append((hook, layer_idx, head_num))
```

Hi! Did you try using `prepend=True` when registering your module forward hook? As in:

```
hook = attention_layer.register_forward_hook(create_ablation_hook(head_num), prepend=True)
```
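A toy check of that keyword (available on `register_forward_hook` since PyTorch 2.0, on a stand-in `nn.Linear` rather than your model): a hook registered last still runs first when prepended.

```python
import torch
import torch.nn as nn

order = []
layer = nn.Linear(4, 4)

# Registered first, but will end up running second
layer.register_forward_hook(lambda m, i, o: order.append("view"))
# prepend=True pushes this hook to the front of the hook queue
layer.register_forward_hook(lambda m, i, o: order.append("ablation"), prepend=True)

layer(torch.randn(1, 4))
print(order)  # ['ablation', 'view']
```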

`prepend=True` did not seem to work.

I added the ablation hooks and then the view hooks, but for some reason the effect of the ablation hooks on the output did not show up in the attn_probs captured by the view hooks.

the code is

```
hooks = []

# Define an ablation hook for a specific head
def create_ablation_hook(target_head_num):
    def ablation_hook(module, input, output):
        attention_output = output[0]
        batch_size_tensor, seq_len, hidden_dim = attention_output.shape
        num_heads = 32
        head_dim = hidden_dim // num_heads

        # Zero out the specific head
        reshaped = attention_output.view(batch_size_tensor, seq_len, num_heads, head_dim)
        reshaped[:, :, target_head_num, :] = 0
        modified = reshaped.view(batch_size_tensor, seq_len, hidden_dim)

        return (modified,) + output[1:]
    return ablation_hook

for layer_idx, head_num in reciever_heads_sorted:
    print(f"    Adding ablation hook for head ({layer_idx}, {head_num})")

    # Register hook for this layer, prepended so it runs first
    attention_layer = model.model.layers[layer_idx].self_attn
    hook = attention_layer.register_forward_hook(create_ablation_hook(head_num), prepend=True)
    hooks.append((hook, layer_idx, head_num))

import torch

view_hooks = []
hidden_states_dict = {}
dropout = 0.0  # Set to your model's dropout if needed

# Define a view hook for a specific head
def create_view_hook(target_head_num, layer_idx):
    def view_hook(module, input, output):
        attn_output, attn_weights = output

        batch_size, seq_len, hidden_dim = attn_output.shape
        num_heads = 32
        head_dim = hidden_dim // num_heads

        # Extract only the output for the target head
        attn_output_reshaped = attn_output.view(batch_size, seq_len, num_heads, head_dim)
        target_head_output = attn_output_reshaped[:, :, target_head_num, :]  # (batch, seq, head_dim)

        # Extract only the attention probs for the target head
        target_head_probs = attn_weights[:, target_head_num, :, :]  # (batch, seq, seq)

        hidden_states_dict[(layer_idx, target_head_num)] = {
            "attn_output": target_head_output.detach().cpu().to(torch.float32),
            "attn_probs": target_head_probs.detach().cpu().to(torch.float32),
        }
        return output

    return view_hook

for layer_idx, head_num in reciever_heads:
    print(f"    Adding view hook for head ({layer_idx}, {head_num})")

    # Register hook for this layer
    attention_layer = model.model.layers[layer_idx].self_attn
    hook = attention_layer.register_forward_hook(create_view_hook(head_num, layer_idx))
    view_hooks.append((hook, layer_idx, head_num))
```

I registered the ablation hooks before the view hooks as suggested; however, the ablation does not seem to affect the outputs and inputs seen by the view hooks, even though the ablation hooks do affect the output of the model.

I tested this: the attention weights from `output_attentions=True` still match what the view hooks capture, even with `prepend=True` on the ablation hooks.

My current assumption is that ablating the attn_output in an earlier hook should affect the computation of attn_weights seen by the later hooks.
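One way to sanity-check that assumption on a toy stand-in (not the Llama attention, just a module that returns `(output, weights)` the way `self_attn` does): a forward hook only fires after `forward()` has finished, so it can rewrite the returned attn_output but not the attn_weights that were already computed inside `forward()`.

```python
import torch
import torch.nn as nn

class ToyAttn(nn.Module):
    # Mimics an attention block returning (output, weights):
    # both are computed inside forward, before any hook runs.
    def forward(self, x):
        weights = x.softmax(dim=-1)  # stand-in for attn_weights
        out = weights @ x            # stand-in for attn_output
        return out, weights

attn = ToyAttn()

def ablate(module, inputs, output):
    out, weights = output
    # The hook rewrites out, but weights are already fixed
    return torch.zeros_like(out), weights

attn.register_forward_hook(ablate)
out, weights = attn(torch.randn(1, 3, 3))
print(out.abs().sum().item())  # 0.0 -- the hook's change is visible
print(weights.sum().item())    # 3.0 -- rows still sum to 1, untouched by the hook
```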

I think I confirmed that `prepend=True` fixes the original issue of PyTorch hook ordering. I will open a new issue about why the ablation hooks' effect does not propagate to downstream results or update the attn_probs tensors captured later.

Ablation affects the tensor for that layer, but it has no impact on the following heads or layers; the ablation did not seem to affect their output.
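For what it's worth, on plain `nn.Module` a value returned from a forward hook does replace the module's output for whatever consumes it next; a toy propagation check (using `nn.Identity` stand-ins, not the real layers):

```python
import torch
import torch.nn as nn

seen = {}
first, second = nn.Identity(), nn.Identity()

# Zero the first module's output by returning a new tensor from its hook
first.register_forward_hook(lambda m, i, o: torch.zeros_like(o))

# Record what the second module actually receives as input
def record(module, inputs, output):
    seen["inp"] = inputs[0]

second.register_forward_hook(record)

second(first(torch.ones(2, 2)))
print(seen["inp"])  # all zeros: the hook's return value did flow downstream
```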