Changing Neurons during Inference Using Forward Hooks

I want to confirm my understanding of Hooks in PyTorch.

Objective: modify the inputs to the convolution layers during inference. For simplicity, assume the neuron values will be halved.

Code (showing hook only):

# Define hooks
class SaveOutput:
    def __init__(self):
        self.outputs = []

    def __call__(self, module, module_in, module_out):
        # halve neuron values
        module_in = module_in * 0.5
        # see the layer outputs
        self.outputs.append(module_out)

    def clear(self):
        self.outputs = []

save_output = SaveOutput()

# register hooks
hook_handles = []

for layer in model.modules():
    # only for the Conv2D
    if isinstance(layer, torch.nn.modules.conv.Conv2d):
        handle = layer.register_forward_hook(save_output)
        hook_handles.append(handle)

Is this okay?

Thank you.

Hi,

If you want to modify the input before it is processed by your module, you need to use a forward pre-hook and return the new value for the input.
If you want to change the output, you can use a forward hook and return the new value that should be used from then on.
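A minimal sketch of both cases, using a toy 1x1 Conv2d with its weight set to 1 so the effect of the hooks is easy to verify (note that a pre-hook receives the input as a tuple of the module's positional arguments):

```python
import torch

# toy layer: a 1x1 conv with weight 1 and no bias acts as the identity
conv = torch.nn.Conv2d(1, 1, kernel_size=1, bias=False)
with torch.no_grad():
    conv.weight.fill_(1.0)

def halve_input(module, module_in):
    # module_in is a tuple of the positional inputs; returning a
    # new tuple replaces the input the module actually receives
    return (module_in[0] * 0.5,)

def double_output(module, module_in, module_out):
    # returning a value from a forward hook replaces the output
    return module_out * 2.0

pre_handle = conv.register_forward_pre_hook(halve_input)
post_handle = conv.register_forward_hook(double_output)

x = torch.randn(1, 1, 4, 4)
y = conv(x)
# halving the input and doubling the output cancel out here,
# so y equals x, confirming both hooks took effect
```

Remember to call handle.remove() on the returned handles once you no longer want the hooks applied.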


Does that mean, the change would be like so:

def mod_neurons(module, module_in):
    # halve neuron values
    module_in = module_in * 0.5

# register hooks
hook_handles = []

for layer in model.modules():
    # only for the Conv2D
    if isinstance(layer, torch.nn.modules.conv.Conv2d):
        handle = layer.register_forward_pre_hook(mod_neurons)
        hook_handles.append(handle)

As I mentioned, just doing module_in = module_in * 0.5 is not going to change what is used in the forward. You need to return the updated value from the hook if you want it to be used: return module_in.
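Putting that together, a corrected sketch of the pre-hook (the model here is a hypothetical stand-in; note that module_in arrives as a tuple of the module's positional inputs, so the hook returns a new tuple):

```python
import torch
import torch.nn as nn

def mod_neurons(module, module_in):
    # module_in is a tuple; halve each tensor and return the new
    # tuple so the module actually receives the modified input
    return tuple(t * 0.5 for t in module_in)

# stand-in model for illustration
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 4, 3))

# register hooks
hook_handles = []
for layer in model.modules():
    # only for the Conv2d layers
    if isinstance(layer, nn.Conv2d):
        handle = layer.register_forward_pre_hook(mod_neurons)
        hook_handles.append(handle)

x = torch.randn(1, 3, 8, 8)
out = model(x)  # runs with halved conv inputs

# remove the hooks once done so later calls run unmodified
for handle in hook_handles:
    handle.remove()
```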
