I have a function f(x) which includes an approximation via Taylor expansion, e.g. f(x, a) = g(a) + g'(a)(x - a), where g'(x) = dg(x)/dx is computed with torch.autograd. Now I would like to differentiate it with respect to a, i.e. compute df(x, a)/da. To simplify even further, let's say that f(a) = g'(a):
```python
def f(a):
    return torch.autograd.functional.jacobian(g, a)
```
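(For context, the un-simplified version would look roughly like the sketch below, with torch.sin standing in for my actual g and f_full as a hypothetical name:)

```python
import torch

def g(x):
    return torch.sin(x)  # stand-in for the actual g

def f_full(x, a):
    # first-order Taylor expansion of g around a:
    #   f(x, a) = g(a) + g'(a) * (x - a)
    dg = torch.autograd.functional.jacobian(g, a)
    return g(a) + dg.squeeze() * (x - a)
```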
If I visualize the computational graph, it seems that I lose the gradient when computing the Jacobian.
In particular, using these lines I get an empty graph:
```python
a = torch.ones(1, requires_grad=True)
```
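The same detachment shows up without any visualizer: the output of f has no grad_fn (a quick check, reusing f and a from above):

```python
out = f(a)
# jacobian() without create_graph=True returns a detached result,
# so nothing connects out back to a:
print(out.requires_grad, out.grad_fn)  # False None
```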
Now, my question is: how can I implement this computation?
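For what it's worth, the docs mention that jacobian takes a create_graph flag to compute the Jacobian in a differentiable way, so a sketch of what I would expect to work (again with torch.sin standing in for g) is:

```python
import torch

def g(x):
    return torch.sin(x)  # stand-in for the actual g

def f(a):
    # create_graph=True makes the Jacobian itself part of the graph
    return torch.autograd.functional.jacobian(g, a, create_graph=True)

a = torch.ones(1, requires_grad=True)
out = f(a)                                     # g'(a) = cos(a)
(grad_a,) = torch.autograd.grad(out.sum(), a)  # df/da = g''(a) = -sin(a)
print(grad_a)                                  # tensor([-0.8415])
```

Is this the right approach, or is there a more idiomatic way to do it?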