Say that I want to calculate the output of a function and its Jacobian. One way to do it would be the following:

```python
import torch

net = torch.nn.Linear(5, 2)
x = torch.randn(5)
y = net(x)                                        # first forward pass
jac = torch.autograd.functional.jacobian(net, x)  # second forward pass
```

However, this means I'd have to do two forward passes. It feels like there should be a way to reuse the `y` I already have, something like the following:

```python
jac = torch.autograd.functional.jacobian(lambda x: y, x)
```

but then `jac` is all zeros. Is there a way to do this?
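A minimal sketch of why this happens: the lambda ignores its argument and returns the captured tensor `y`, so as far as autograd is concerned it is a constant function of `x`, and its Jacobian is zero everywhere. Passing the network itself works, but re-runs the forward pass (the `Linear` net here is illustrative):

```python
import torch

net = torch.nn.Linear(5, 2)
x = torch.randn(5)
y = net(x)

# The lambda discards its input and returns the captured y,
# so the traced function is constant and the Jacobian is all zeros.
jac_zero = torch.autograd.functional.jacobian(lambda x: y, x)
assert torch.all(jac_zero == 0)

# Passing the function itself differentiates a real dependency on x,
# at the cost of a second forward pass. For a Linear layer the
# Jacobian is just the weight matrix.
jac = torch.autograd.functional.jacobian(net, x)
assert torch.allclose(jac, net.weight)
```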

The end goal is to use the Jacobian with respect to the network’s activations in a regularization term, but this is the step I’m stuck on.
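For context, the end goal could be sketched roughly like this, assuming a squared-Frobenius-norm penalty on the input Jacobian (the task loss, penalty weight `lam`, and shapes here are illustrative assumptions, not from the post). The key detail is `create_graph=True`, so the penalty itself stays differentiable:

```python
import torch

net = torch.nn.Linear(5, 2)
x = torch.randn(8, 5, requires_grad=True)  # batch of 8 inputs
y = net(x)

# Squared Frobenius norm of dy/dx, one backward pass per output dim,
# built with create_graph=True so we can backprop through the penalty.
penalty = 0.0
for i in range(y.shape[1]):
    (g,) = torch.autograd.grad(y[:, i].sum(), x, create_graph=True)
    penalty = penalty + (g ** 2).sum()

lam = 1e-2  # illustrative penalty weight
loss = y.pow(2).mean() + lam * penalty
loss.backward()  # gradients flow through the Jacobian penalty
```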

Have a look at it; it might be useful.

Thanks, your solution works fine as long as one makes sure that the input and output are 1D. I got an order-of-magnitude speed-up in the Jacobian calculation with your approach.

```python
import torch

def jacobian(y, x):
    """Compute dy/dx for 1D y and x, one row at a time, with grad_outputs
    set to the one-hot vectors [1, 0, ..., 0], ..., [0, ..., 0, 1]."""
    jac = torch.zeros(y.shape[0], x.shape[0])
    for i in range(y.shape[0]):
        grad_outputs = torch.zeros_like(y)
        grad_outputs[i] = 1.0
        jac[i] = torch.autograd.grad(y, x, grad_outputs=grad_outputs,
                                     retain_graph=True)[0]
    return jac
```
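A quick self-contained sanity check of the one-hot `grad_outputs` trick (the `Linear` net here is illustrative): the forward pass runs once, `retain_graph=True` lets the same graph be backpropagated repeatedly, and for a linear layer the recovered Jacobian should equal the weight matrix.

```python
import torch

net = torch.nn.Linear(5, 2)
x = torch.randn(5, requires_grad=True)
y = net(x)  # single forward pass, reused for every row of the Jacobian

rows = []
for i in range(y.shape[0]):
    grad_outputs = torch.zeros_like(y)
    grad_outputs[i] = 1.0  # select output i -> row i of the Jacobian
    rows.append(torch.autograd.grad(y, x, grad_outputs=grad_outputs,
                                    retain_graph=True)[0])
jac = torch.stack(rows)

assert torch.allclose(jac, net.weight)  # Linear layer: Jacobian == weight
```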