Vectorized Jacobian Vector Products (in 1.8)

Is it now possible with PyTorch 1.8 to compute vectorized Jacobian vector products? I know in the past it wasn’t (e.g., Jacobian functional API batch-respecting Jacobian), but I am wondering if that has changed with PyTorch 1.8 through some implementation of vmap.

In particular, I am referring to the following two use cases:

  1. Calculating the Jacobian vector products J v_i for i = 1, …, N, where J is the Jacobian of a function f at a point x and the v_i are a set of vectors.

I am wondering if this is how the new vectorized torch.autograd.functional.jacobian/hessian are implemented (with the v_i being the standard basis vectors), and if so, is there a way to do this for any general set of vectors v_i? Related in particular to Add `vectorize` flag to torch.autograd.functional.{jacobian, hessian} by zou3519 · Pull Request #50915 · pytorch/pytorch · GitHub

  2. Calculating the Jacobian vector products J_i v_i for i = 1, …, N, where J_i is the Jacobian of a function f at a point x_i (the difference vs. 1 is that the Jacobian is now also computed over a batch of different inputs x_i). A minimal sketch of both cases follows below.
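
For concreteness, here is a minimal sketch of what I mean by both use cases, written with explicit Python loops over torch.autograd.functional.jvp (the toy f and the shapes are made up; the loops are exactly what I would like to vectorize):

```python
import torch
from torch.autograd.functional import jvp

def f(x):
    # toy function R^3 -> R^2, purely for illustration
    return torch.stack([x[0] * x[1], x[2] ** 2])

x = torch.randn(3)
vs = torch.randn(5, 3)   # N = 5 direction vectors v_i
xs = torch.randn(5, 3)   # N = 5 input points x_i

# Use case 1: J(x) v_i for a single fixed x, looping over the v_i
jvps_case1 = torch.stack([
    jvp(f, x, v)[1] for v in vs
])  # shape (5, 2)

# Use case 2: J(x_i) v_i, looping over pairs (x_i, v_i)
jvps_case2 = torch.stack([
    jvp(f, xi, vi)[1] for xi, vi in zip(xs, vs)
])  # shape (5, 2)
```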

Thanks!


Hi,

  1. We actually do that by computing vJ (vector-Jacobian products) with a batch of v, so you won’t be able to re-use that machinery for Jv here.
  2. That would be an “element-wise” Jacobian vector product. There are two cases here:
    a. If your function already works on a batch with no interaction between the samples, then you can compute a regular Jv and it will give the same result (see the sketch below).
    b. If there is interaction between the batch samples in the forward and you don’t want that to show up in the Jacobian, you’ll have to do a for-loop around the Jv computation for now, I’m afraid.
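
To make 2a/2b concrete, here is a minimal sketch assuming a toy per-sample function (the function and shapes are made up):

```python
import torch
from torch.autograd.functional import jvp

def f(x):
    # per-sample function: each row of x is handled independently,
    # so there is no interaction across the batch dimension
    return x.sin() * x.pow(2)

xs = torch.randn(8, 3)   # batch of inputs x_i
vs = torch.randn(8, 3)   # one direction v_i per input

# Case a: one Jv over the whole batch already yields J_i v_i for every i,
# because the full Jacobian is block-diagonal across samples.
_, batched_jvp = jvp(f, xs, vs)   # shape (8, 3)

# Case b: with interaction across samples, fall back to a loop for now
# (shown with the same toy f just to illustrate the pattern).
per_sample_jvp = torch.stack([
    jvp(f, xi, vi)[1] for xi, vi in zip(xs, vs)
])                                # shape (8, 3)
```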

Hi Alban,

Thanks! Could you give me a bit more information about 1)? A vJ product with a batch of v works just as well for me.

In that case, yes, that is how it is implemented, using the new vmap prototype.
You can see the code here: pytorch/functional.py at e91aeb0470f329556f599ac9feb21b0cff31e11c · pytorch/pytorch · GitHub
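
For completeness, a small usage sketch of the user-facing side (the `vectorize` flag on the functional API, still experimental in 1.8; the toy function and shapes are made up):

```python
import torch
from torch.autograd.functional import jacobian, vjp

def f(x):
    # toy function mapping a (5, 3) input to a length-5 output
    return (x ** 2).sum(dim=-1)

x = torch.randn(5, 3)

# vectorized full Jacobian: computed internally as vJ with a batch of basis vectors
J = jacobian(f, x, vectorize=True)   # shape (5, 5, 3)

# a single vJ product for one v; for a general batch of v's, loop over them
v = torch.randn(5)
_, vjp_out = vjp(f, x, v)            # vjp_out has shape (5, 3)
```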
