Linear layer cannot register backward pre hook

Edit: for anyone reading in the future, module backward pre hooks were introduced in the 2.0.0 release but are buggy in that version, so if you want to use them you should be on 2.0.1.

FYI, there's a bug in the module backward pre hook. You should either update to a nightly in a couple of days, or apply the patch in Understanding gradient calculation with backward_pre_hooks - #3 by soulitzer if you'd like to use module backward pre hooks.
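
On a fixed build (2.0.1 or a recent nightly), the pre hook path should look roughly like the sketch below; the * 2 scaling and the pre_hook name are just for illustration, not anything special about the API.

import torch
import torch.nn as nn

model = nn.Linear(2, 2)
a = torch.ones(2, requires_grad=True)

def pre_hook(module, grad_outputs):
    # grad_outputs is a tuple of gradients w.r.t. the module's outputs;
    # returning a new tuple replaces them before grad_input is computed
    return (grad_outputs[0] * 2,)

model.register_full_backward_pre_hook(pre_hook)

out = model(a)
out.sum().backward()
print(a.grad)  # doubled compared to running without the hook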

Another workaround is to register a hook on the output tensor of your module instead:

import torch
import torch.nn as nn

a = torch.ones(2, requires_grad=True)

model = nn.Linear(2, 2)

# Tensor hook: receives the gradient w.r.t. `out` and returns a modified one
def fn(grad_output):
    return grad_output * 2

def fn2(module, grad_inputs, grad_outputs):
    # The * 2 applied by fn to the output gradient is still observed here
    print(grad_inputs, grad_outputs)
    return (grad_inputs[0] / 2,)

# No longer using full backward pre hooks
# model.register_full_backward_pre_hook(fn)
model.register_full_backward_hook(fn2)

out = model(a)
# Instead, register a hook on the output tensor of your module.
# Unlike module hooks, tensor hooks must be registered AFTER the forward runs.
out.register_hook(fn)

out.sum().backward()
print(a.grad)
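
In this example the two changes cancel out: fn doubles the gradient flowing back into the Linear layer, and fn2 halves the resulting grad_input, so the printed a.grad matches what you'd get with no hooks at all. The point is just that the tensor hook's modification is visible to the full backward hook, the same way a backward pre hook's modification would be.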