Derivative of the Jacobian

Hello! I want to take the derivative of the Jacobian using PyTorch, but it seems like I am doing something wrong. Here is part of my code:

import numpy as np
import torch
import torch.nn as nn

# Build a (3, 2) input from two column vectors
x_1 = np.arange(1, 4, 1)
x_1 = torch.from_numpy(x_1).reshape(len(x_1), 1)
x_1 = x_1.float()
x_2 = np.arange(1, 4, 1)
x_2 = torch.from_numpy(x_2).reshape(len(x_2), 1)
x_2 = x_2.float()
x = torch.cat((x_1, x_2), 1)

x.requires_grad = True

w1 = nn.Linear(2, 2, bias=False)

y = w1(x)

def jacobian(inputs, outputs):
    # Stack the gradient of each output column with respect to the inputs;
    # create_graph=True keeps the graph so the result can be differentiated again.
    return torch.stack([torch.autograd.grad([outputs[:, i].sum()], [inputs], create_graph=True)[0]
                        for i in range(outputs.size(1))], dim=-1)

jac = jacobian(x, y)

So at this point jac contains the Jacobian matrix of my system, which works just fine. But I want to take the derivative of jac with respect to x (basically the second-order partial derivatives). But when I do d_jac = jacobian(x, jac) I get this error:

RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.

I am not sure why, as jac is obviously derived from y, so it should still depend on x. Can someone help me? Thank you!

An output might be linear in some of the inputs; in that case those Jacobian entries are constant and thus do not depend on the inputs. You could pass the suggested flag and look for zeros.
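For instance, something like this (a minimal sketch for the example above) shows what the flag does; for a linear layer, jac only depends on w1.weight, not on x, so autograd returns None instead of raising:

# allow_unused=True makes grad return None for inputs that are
# not connected to the output in the autograd graph
g = torch.autograd.grad([jac[:, :, 0].sum()], [x],
                        create_graph=True, allow_unused=True)[0]
print(g)  # None: jac is not connected to x in the graph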

Best regards

Thomas

Thank you for your reply! I am not sure exactly what you mean by flags here. However, in my problem I actually want the output to be “as linear as possible” in the inputs: I want to enforce (in the actual code, not this simplified version) that the partial derivatives of the Jacobian are as close to zero as possible. Coming back to the example I gave, the function is indeed linear (it is just a linear layer), but my goal for this example is for d_jac to be a matrix containing only zeros, i.e. taking the derivative of a constant matrix (jac) should give only zeros. Basically, I just want to get that the second derivative of a linear function with respect to the given variable is zero. Can I do that? And if not, how can I check (or enforce) whether the Jacobian depends on a given variable? Thank you!

The error message suggests passing allow_unused=True. How do things work for you when you do that?

I tried it, but it gives me None instead of a tensor, which I can’t use; it doesn’t give me the zeros that I would need.

Well, you’ll need to postprocess that yourself;

[torch.zeros_like(i) if g is None else g for i, g in zip(inputs, grads)]

or some such should do the trick.
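Putting it all together, here is a minimal end-to-end sketch of this thread’s example, with the jacobian helper modified to pass allow_unused=True and to substitute zeros for the None gradients (the construction of x is simplified; names follow the original post):

import torch
import torch.nn as nn

x = torch.cat((torch.arange(1., 4.).reshape(-1, 1),
               torch.arange(1., 4.).reshape(-1, 1)), 1)
x.requires_grad = True

w1 = nn.Linear(2, 2, bias=False)
y = w1(x)

def jacobian(inputs, outputs):
    # allow_unused=True returns None for gradients that are not connected
    # to inputs; replace those with zeros so the result is always a tensor.
    grads = [torch.autograd.grad([outputs[:, i].sum()], [inputs],
                                 create_graph=True, allow_unused=True)[0]
             for i in range(outputs.size(1))]
    grads = [torch.zeros_like(inputs) if g is None else g for g in grads]
    return torch.stack(grads, dim=-1)

jac = jacobian(x, y)      # first derivatives: each jac[n] equals w1.weight.t()
d_jac = jacobian(x, jac)  # second derivatives of a linear map: all zeros
print(torch.allclose(d_jac, torch.zeros_like(d_jac)))  # True

With d_jac materialized as an actual zero tensor, something like d_jac.pow(2).sum() could then serve as the penalty term that pushes the model toward being linear in x, which is what the original question was after.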