What's the difference between tensor.register_hook and grad_fn.register_hook?

I noticed that both Tensor and grad_fn have a register_hook method, and both seem to run once gradients are computed. Are they the same?

import torch

a = torch.tensor([2.], requires_grad=True)
b = a * 3
grad_acc = b.grad_fn.next_functions[0][0]  # the AccumulateGrad node for `a`

# what's the difference between these two operations?
a.register_hook(...)
grad_acc.register_hook(...)

b.backward()  # hooks must be registered before calling backward, or they never fire

I think the difference is timing within the backward pass: a hook registered on a Tensor fires at the point where the gradient with respect to that tensor has been computed, but before it is accumulated into .grad (so it can return a modified gradient), while register_hook on a grad_fn is a post-hook on that autograd Node that runs after the Node has executed, i.e. for an AccumulateGrad node, after the gradient has already been written into .grad.
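A small experiment to check the ordering. Note this is a sketch, not an authoritative statement of the API: in recent PyTorch the Node hook receives (grad_inputs, grad_outputs), but I use *args below to stay robust across versions, and I only record which hook fired and what a.grad looked like at that moment.

import torch

a = torch.tensor([2.], requires_grad=True)
b = a * 3
grad_acc = b.grad_fn.next_functions[0][0]  # AccumulateGrad node for `a`

events = []

# Tensor hook: called with the gradient w.r.t. `a`, before accumulation into a.grad.
# Returning None leaves the gradient unchanged.
a.register_hook(lambda grad: events.append(("tensor hook", a.grad)))

# Node post-hook: called after the AccumulateGrad node has run,
# i.e. after the gradient has been accumulated into a.grad.
grad_acc.register_hook(lambda *args: events.append(("node hook", a.grad)))

b.backward()

print(events)  # tensor hook fires first (a.grad still None), node hook after (a.grad set)
print(a.grad)  # tensor([3.])

If you run this, the tensor hook should report a.grad as still unset, while the node hook sees the final accumulated value, which is the clearest way I know to see that one is a pre-accumulation hook and the other a post-execution hook.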