Unexpected hook behavior with 3D tensor and inplace operation

Somehow, when you combine a 3D input with an in-place operation on the output tensor, the backward hook does not trigger. I don't understand why this happens, especially since the 2D input case works fine.

import torch
h = 4
layer = torch.nn.Linear(h, h)
 
def CaseNot3D():
    inp = torch.randn((4, 4))
    out = layer(inp)
    out.register_hook(lambda grad_output: print('Not 3D Case'))
    out += inp
    out = out.sum()
    out.backward()
 
def CaseNotInplace():
    inp = torch.randn((4, 4, 4))
    out = layer(inp)
    out.register_hook(lambda grad_output: print('Not Inplace Case'))
    out = out + inp
    out = out.sum()
    out.backward()
 
def CaseBroken():
    inp = torch.randn((4, 4, 4))
    out = layer(inp)
    out.register_hook(lambda grad_output: print('Broken Case'))
    out += inp
    out = out.sum()
    out.backward()
 
 
CaseNot3D()
#PRINTS "Not 3D Case"
CaseNotInplace()
#PRINTS "Not Inplace Case"
CaseBroken()
#DOES NOT PRINT "Broken Case"

I haven’t seen use cases where .register_hook is called on a non-leaf variable, so I'm unsure whether that’s supported. @albanD would know and can correct me.

It is ~expected, I’m afraid.
The output here is a view into the result of the linear op. So when the in-place operation happens, this particular view is not used anymore to compute the final result; no gradient is ever computed for it, and the hook never fires.
In the 2D case there are no views, so the output itself is part of the graph and is used, and the hook fires.
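You can check this directly, for what it's worth. A minimal sketch, using the internal Tensor._is_view() helper (not part of the public API, and the result may differ across PyTorch versions):

import torch

layer = torch.nn.Linear(4, 4)

out_2d = layer(torch.randn(4, 4))     # 2D input
out_3d = layer(torch.randn(4, 4, 4))  # 3D input

# Per the explanation above, the 3D output is a view back onto the
# flattened matmul result, while the 2D output is not a view at all.
print(out_2d._is_view())  # expected: False
print(out_3d._is_view())  # expected: True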
I’m not sure what the best move is here tbh, could you open an issue on GitHub about this?
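In the meantime, a minimal sketch of one possible workaround (my own suggestion, not an official recommendation): register the hook after the in-place op, so it is attached to the tensor that actually feeds the rest of the graph.

import torch

h = 4
layer = torch.nn.Linear(h, h)

def CaseWorkaround():
    inp = torch.randn((4, 4, 4))
    out = layer(inp)
    out += inp
    # Hook registered on the post-inplace tensor, which is the one consumed
    # by the sum below, so its gradient is computed and the hook fires.
    out.register_hook(lambda grad_output: print('Workaround Case'))
    out = out.sum()
    out.backward()

CaseWorkaround()
# PRINTS "Workaround Case"

Alternatively, avoiding the in-place add (as in CaseNotInplace above) keeps the hooked tensor in the graph.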