How to trace gradients through the network

Hi. I’m trying to use a HyperNetwork, i.e. a network that generates the weights for another network. However, I’m finding that the HyperNetwork’s weights are not being updated when I call opt.step(). Is there a way to trace the gradient graph back to the input, to make sure the HyperNetwork’s weights are on it? It would be great if there were a way to visualise that, but just being able to trace the gradients would be extremely helpful.

i.e.

```python
a = f1(input)
b = f2(a)
c = f3(b)
loss = MSE(c, truth)
loss.backward()
```
From here, I’d like to be able to trace the gradients back to make sure the parameters of f3 have a grad-graph edge to the parameters of f2, which in turn have a grad-graph edge to the parameters of f1.
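
For reference, here is a minimal sketch of how such a trace can be done by hand on the example above. Every tensor produced by a differentiable op carries a grad_fn node; each node’s next_functions points at the nodes of its inputs, and leaf tensors (parameters and inputs with requires_grad=True) appear as AccumulateGrad nodes whose .variable attribute is the tensor itself. trace_graph is a hypothetical helper name, not a PyTorch API:

```python
import torch
import torch.nn as nn

def trace_graph(fn, depth=0, seen=None):
    # Hypothetical helper: recursively walk the autograd graph rooted
    # at a grad_fn node, printing one line per node. Leaf tensors
    # (e.g. the parameters of f1/f2/f3) appear as AccumulateGrad nodes.
    if seen is None:
        seen = set()
    if fn is None or id(fn) in seen:
        return
    seen.add(id(fn))
    pad = "  " * depth
    if hasattr(fn, "variable"):  # AccumulateGrad -> a leaf tensor
        print(f"{pad}{type(fn).__name__}: leaf {tuple(fn.variable.shape)}")
    else:
        print(f"{pad}{type(fn).__name__}")
    for next_fn, _ in getattr(fn, "next_functions", ()):
        trace_graph(next_fn, depth + 1, seen)

# The chain from the example above, with f1/f2/f3 as small linear layers.
f1, f2, f3 = nn.Linear(4, 4), nn.Linear(4, 4), nn.Linear(4, 4)
input, truth = torch.randn(1, 4), torch.randn(1, 4)

a = f1(input)
b = f2(a)
c = f3(b)
loss = nn.functional.mse_loss(c, truth)

# Walk the graph (before or after loss.backward()); if a network's
# parameters never show up as leaves here, they cannot receive
# gradients and opt.step() will not move them.
trace_graph(loss.grad_fn)
```

Matching the AccumulateGrad leaves (by identity) against model.named_parameters() tells you exactly which parameters are reachable from the loss; any HyperNetwork parameter missing from that set will never receive a gradient.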

To check whether an input is in the graph, you can use torch.autograd.grad(output, inputs=(input,)).
If the input is not part of the graph, autograd will raise an error.
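
For example, a minimal sketch with two made-up tensors, x in the graph and y not:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.randn(3, requires_grad=True)  # never used below

out = (x * 2).sum()

# x contributed to out, so this returns its gradient:
(gx,) = torch.autograd.grad(out, inputs=(x,), retain_graph=True)
print(gx)  # tensor([2., 2., 2.])

# y is not in out's graph, so autograd raises a RuntimeError:
try:
    torch.autograd.grad(out, inputs=(y,))
except RuntimeError as e:
    print("not in graph:", e)
```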

Thanks for the reply. I’m afraid that doesn’t solve my problem. I need to be able to trace what is in the gradient graph of the output. torch.autograd.grad will raise an error if an input is not part of the graph, but it gives no way to see what is in the graph.
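
For the visualisation side, one option (assuming you can install the third-party torchviz package and Graphviz) is make_dot, which walks the same grad_fn structure and renders it; the model and file name here are just placeholders:

```python
import torch
import torch.nn as nn
from torchviz import make_dot  # third-party: pip install torchviz

model = nn.Linear(4, 2)  # placeholder for your own module
loss = model(torch.randn(1, 4)).sum()

# make_dot returns a graphviz.Digraph; passing named_parameters
# labels the leaf nodes with their parameter names.
dot = make_dot(loss, params=dict(model.named_parameters()))
dot.render("autograd_graph", format="png")  # writes autograd_graph.png
```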