Is there any method to support Jacobian-matrix products? For example, let's say I have a scalar loss and parameters of dimension p. My "v" vector in the JVP is a matrix of dimension "batch_size x p". I want to compute the JVP for each row of the batch so that the output is "batch_size x 1". I was wondering if there are any ways to do this without using a loop.
Not yet, but it's in the works. Check out the issue tracking forward-mode AD here: [feature request] Forward-mode automatic differentiation · Issue #10223 · pytorch/pytorch · GitHub.
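That said, for the specific scalar-loss case you describe, forward mode may not even be needed: the Jacobian of a scalar loss is just the gradient g (shape p), so the JVP with each row v_i of your matrix is simply the dot product v_i · g, and the whole batch is one matrix-vector product. A minimal sketch (the loss and shapes here are placeholders, not from your actual model):

```python
import torch

# Hypothetical setup: scalar loss computed from parameters of dimension p.
p, batch_size = 5, 3
params = torch.randn(p, requires_grad=True)
loss = (params ** 2).sum()  # placeholder scalar loss

# For a scalar loss, the Jacobian is the gradient g (shape p).
(g,) = torch.autograd.grad(loss, params)

# Batch of "v" vectors, shape (batch_size, p).
V = torch.randn(batch_size, p)

# JVP for every row at once: no loop, just one matmul.
jvps = V @ g  # shape (batch_size,)
print(jvps.shape)
```

This only works because the output is scalar; for a vector-valued function you would genuinely need forward mode (or the double-backward trick of differentiating a VJP).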