Computing batch Jacobian efficiently

I’m trying to compute the Jacobian (and its inverse) of the output of an intermediate layer (block1) with respect to the input to the first layer. The code looks like:

import sys
import torch

def getInverseJacobian(net2, x):
    # x has shape (n_batches, input_dim); take one input point from x
    # and forward it through the 1st block.
    x.requires_grad_(True)  # the input must require grad for autograd.grad below

    # Jacobian of block1's output w.r.t. the input point
    jac = torch.zeros(size=(x.shape[1], x.shape[1]))

    y = net2.block1(x)

    # One backward pass per output component; [0] unpacks the tuple returned by autograd.grad
    for i in range(x.shape[1]):
        jac[i, :] = torch.autograd.grad(y[0][i], x, create_graph=True)[0]

    # Getting inverse of Jacobian using the Moore-Penrose pseudo-inverse
    jac_inverse = torch.pinverse(jac)

    if torch.isnan(jac_inverse).any():
        print('Nan encountered in Jacobian !')
        sys.exit(0)

    return jac_inverse

This works well for a single sample in a batch. How do I convert it to compute the Jacobian for the complete batch without using a loop? This function will be called many times during training, so a loop would not be ideal.
Any suggestions? Am I calculating the Jacobian efficiently in the first place? (It is accurate, though.)


Hi,

Yes, this looks like the right way to do it.
FYI, we now have a built-in function that does the same thing: https://pytorch.org/docs/stable/autograd.html#torch.autograd.functional.jacobian

There is no better way to compute the Jacobian yet, I’m afraid. But we’re working on it.
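
For reference, a minimal self-contained sketch of that built-in helper on a single input point (the toy block1 below just stands in for net2.block1 from the question):

import torch
import torch.nn as nn

# Toy stand-in for net2.block1: maps a (1, dim) input to a (1, dim) output
block1 = nn.Sequential(nn.Linear(4, 4), nn.Tanh())
x = torch.randn(1, 4)

# Jacobian of the full forward map, shape (1, 4, 1, 4)
jac = torch.autograd.functional.jacobian(block1, x)
# Drop the two batch dimensions to get the (dim, dim) matrix built by the loop above
jac = jac[0, :, 0, :]
jac_inverse = torch.pinverse(jac)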


Hi, do you mean the built-in function also works on one input point?

By the way, may I ask for an example of https://pytorch.org/docs/stable/autograd.html#torch.autograd.functional.jacobian used with respect to the parameters of a network, please? I have no clue how to do that, since the first argument of jacobian is a function.

Not sure what you mean by “one input point”; could you clarify?

For nn.Module, you can check this answer: Get gradient and Jacobian wrt the parameters - #3 by albanD
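
As a rough sketch of the idea (this assumes the forward pass can be rewritten as an explicit function of the parameter tensors; the linear layer here is just an illustration, not a general recipe for arbitrary nn.Modules):

import torch
import torch.nn.functional as F

layer = torch.nn.Linear(3, 2)
x = torch.randn(5, 3)  # (batch, in_features)

def forward_from_params(weight, bias):
    # Re-express the layer's forward pass as a function of its parameters
    return F.linear(x, weight, bias)  # output shape (5, 2)

# One Jacobian per parameter tensor:
# jac_w has shape (5, 2, 2, 3), jac_b has shape (5, 2, 2)
jac_w, jac_b = torch.autograd.functional.jacobian(
    forward_from_params, (layer.weight, layer.bias)
)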

Hi,

Recently, I ran into the same problem and tried to do the batch_jacobian operation with for-loops. Although it works, the runtime is too long. Finally, I implemented batch_jacobian in another way; it is more efficient and the runtime is close to tf.GradientTape’s.

from torch import autograd

def batch_jacobian(func, x, create_graph=False):
    # x is in shape (Batch, Length)
    def _func_sum(x):
        return func(x).sum(dim=0)
    # Summing over the batch decouples the samples, so the (Length_out, Batch, Length_in)
    # Jacobian of the summed output can be permuted into per-sample Jacobians.
    return autograd.functional.jacobian(_func_sum, x, create_graph=create_graph).permute(1, 0, 2)
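
For example, a quick usage check with a toy network standing in for func:

import torch

net = torch.nn.Sequential(torch.nn.Linear(3, 3), torch.nn.Tanh())
x = torch.randn(8, 3)

jac = batch_jacobian(net, x)  # shape (8, 3, 3): one Jacobian per sample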

Note that if you’re using the latest version of PyTorch, there is a vectorize=True flag for functional.jacobian() that might speed things up in some cases.
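
For example, the same helper as above with the flag turned on (a sketch; vectorize=True is available in recent releases and is still marked experimental):

import torch
from torch import autograd

def batch_jacobian_vectorized(func, x, create_graph=False):
    # Same batch trick as above, but jacobian() vectorizes the per-output
    # backward passes instead of looping over them internally.
    def _func_sum(x):
        return func(x).sum(dim=0)
    return autograd.functional.jacobian(
        _func_sum, x, create_graph=create_graph, vectorize=True
    ).permute(1, 0, 2)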