Is an equivalent of torch.autograd.grad(f, x) supported in the C++ version?

Hi,

Based on what I see in https://pytorch.org/cppdocs/, autograd in C++ currently only works by calling backward() on a leaf node, right? Is there something like torch.autograd.grad(f, x), which returns the gradient of a function f with respect to x?
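
To make the question concrete, here is the kind of call I am hoping for. This is only a sketch: the torch::autograd::grad call and its argument order below are my guess at what a C++ counterpart of Python's torch.autograd.grad would look like, not something I found documented:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::ones({3}, torch::requires_grad());
  auto f = (x * x * x).sum();  // f = sum(x^3)

  // Hoped-for C++ counterpart of Python's
  // torch.autograd.grad(f, x, create_graph=True):
  // return df/dx directly instead of accumulating into x.grad().
  auto dfdx = torch::autograd::grad(/*outputs=*/{f}, /*inputs=*/{x},
                                    /*grad_outputs=*/{},
                                    /*retain_graph=*/true,
                                    /*create_graph=*/true)[0];

  // A second call would then give d2f/dx2 with no accumulation to undo.
  auto d2fdx2 = torch::autograd::grad({dfdx.sum()}, {x})[0];
  std::cout << d2fdx2 << std::endl;  // expect 6 * x
}
```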

The reason I need this is that when computing second-order derivatives, the gradient gets accumulated on the leaf node if I call backward() twice, once on the original tensor and once on its grad tensor. So I have to store the first-order derivative in a temporary variable and subtract it from the accumulated result.
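
For reference, this is the workaround I am using now (a minimal sketch, assuming the current libtorch Tensor::backward(gradient, retain_graph, create_graph) signature). Note how the second derivative only comes out after subtracting the stored first-order gradient:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::ones({3}, torch::requires_grad());
  auto f = (x * x * x).sum();  // f = sum(x^3), so df/dx = 3x^2, d2f/dx2 = 6x

  // First backward pass: create_graph=true so x.grad() itself is differentiable.
  f.backward(/*gradient=*/{}, /*retain_graph=*/true, /*create_graph=*/true);

  // Store the first-order derivative before accumulation pollutes it.
  auto first_order = x.grad().clone();

  // Second backward pass accumulates d/dx(sum(df/dx)) on top of x.grad()...
  x.grad().sum().backward();

  // ...so the second derivative has to be recovered by subtraction.
  auto second_order = x.grad() - first_order;
  std::cout << second_order << std::endl;  // expect 6 * x
}
```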

Any help/suggestion would be appreciated. Thanks!

Hi, does anyone have ideas on this? Thanks in advance!
