Multivariate autograd

I am implementing PINNs in PyTorch (GitHub - maziarraissi/PINNs: Physics Informed Deep Learning: Data-driven Solutions and Discovery of Nonlinear Partial Differential Equations, written in TF). The gist is that I have a prediction loss as well as a loss computed from gradients. Suppose my input is x_1, x_2, x_3 and my output is y_1, y_2; then dy_1/dx_1, dy_1/dx_2, ..., dy_2/dx_3 are needed to compute a different kind of loss.

Right now, my options are

  1. autograd.grad(y[1], x, allow_unused=True, retain_graph=True), with retain_graph because I am going to need autograd.grad(y[2], x, ...) too. This differentiates one output at a time and doesn’t seem efficient, but it works (see the sketch after this list).
  2. To make it a vector (Jacobian) computation, I would need something like autograd.grad(y, x) to work, which I don’t know how to do, while still retaining the graph so that loss.backward() later can actually update the weights of the network.
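
For reference, here is roughly what option 1 looks like (an untested, minimal sketch; the small net and the 3-input / 2-output shapes are just placeholders, and I use create_graph=True, which implies retain_graph=True, so the derivatives stay differentiable for the gradient loss):

```python
import torch

# Placeholder network: 3 inputs -> 2 outputs (not my actual PINN).
net = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))

x = torch.randn(3, requires_grad=True)   # (x_1, x_2, x_3)
y = net(x)                               # (y_1, y_2)

# One autograd.grad call per output. create_graph=True keeps the derivatives
# differentiable (and retains the graph), so they can enter the PINN loss and
# loss.backward() can still update the network weights afterwards.
dy1_dx = torch.autograd.grad(y[0], x, create_graph=True, allow_unused=True)[0]  # dy_1/dx_j
dy2_dx = torch.autograd.grad(y[1], x, create_graph=True, allow_unused=True)[0]  # dy_2/dx_j
```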

How do I implement option 2, or do I have no choice other than option 1? Most PINN PyTorch implementations I have found use option 1, but for performance reasons, I’d like to explore option 2.

Thank you.

Hi Magnus!

What you’ve asked for is the Jacobian of the function that maps x to y.
Consider using PyTorch’s torch.autograd.functional.jacobian() (older and more established) or torch.func.jacrev()
(newer and maybe faster, but I’ve never really used it).
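
Something along these lines (an untested sketch; the toy net just stands in for your PINN):

```python
import torch
from torch.autograd.functional import jacobian

# Toy network standing in for the actual PINN: 3 inputs -> 2 outputs.
net = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))
x = torch.randn(3)

# Full 2x3 Jacobian dy_i/dx_j in one call. create_graph=True keeps the
# Jacobian differentiable, so a loss built from it can still update the
# weights via loss.backward().
J = jacobian(net, x, create_graph=True)       # shape (2, 3)

# Newer torch.func API: jacrev builds the Jacobian with reverse-mode
# autodiff and can be vmapped over a batch of inputs.
from torch.func import jacrev, vmap
J2 = jacrev(net)(x)                           # shape (2, 3)
xb = torch.randn(5, 3)
Jb = vmap(jacrev(net))(xb)                    # shape (5, 2, 3)
```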

Best.

K. Frank