How do I monitor gradients in a self-made function (built from ordinary torch.Tensor operations) during the backward() pass?

Thank you for replying!
As I understand it, register_backward_hook is meant for nn.Module objects (the nn package), so it can't be attached to a plain tensor whose requires_grad is True.
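
For contrast, here is a minimal sketch of a module-level hook; the nn.Linear layer and hook name are just assumed examples, and register_full_backward_hook is the current replacement for register_backward_hook:

import torch
import torch.nn as nn

def module_hook(module, grad_input, grad_output):
    # Called during backward() with the gradients flowing into/out of the module
    print(module.__class__.__name__, "grad_output:", grad_output)

layer = nn.Linear(3, 1)
layer.register_full_backward_hook(module_hook)

out = layer(torch.ones(1, 3))
out.sum().backward()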

I solved this problem by using the register_hook function from this discussion.

I implemented it like the following:

import torch

def save_grad(name):
    def hook(grad):
        # Print the gradient flowing into this tensor during backward()
        print(name, ":", grad)
    return hook

a = torch.ones(3, requires_grad=True)
b = 3 * a
c = b * a
loss = c.sum()
b.register_hook(save_grad('b'))  # hook on the intermediate tensor b
loss.backward()                  # prints b's gradient, here d(loss)/db = a = [1., 1., 1.]

I checked for NaNs with your NaN-detection code on each printed grad value.
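
For anyone who wants to do the check inside the hook itself, here is a minimal sketch; it uses torch.isnan as an assumed stand-in for the NaN-detection code referenced above:

import torch

def check_nan_grad(name):
    def hook(grad):
        # torch.isnan(grad).any() is True if any element of the gradient is NaN
        if torch.isnan(grad).any():
            print(name, ": NaN detected in grad:", grad)
        else:
            print(name, ": no NaN in grad:", grad)
    return hook

x = torch.ones(3, requires_grad=True)
y = (3 * x * x).sum()
x.register_hook(check_nan_grad('x'))
y.backward()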