Backprop through Jacobian

I would like to construct my loss function so that it is a function of the Jacobian, as follows:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.001)
z = torch.tensor([2.])
det = torch.autograd.functional.jacobian(model, z)
loss = (torch.abs(det))**2

I would appreciate any suggestions on how I could get this to work. Thanks!


You just need to pass the `create_graph=True` flag to the call to `jacobian`. That keeps the Jacobian computation in the autograd graph, so a loss built from it can be backpropagated to the model's parameters.
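Putting the two pieces together, a minimal sketch of the full loop might look like this (the `.sum()` reduction to get a scalar loss is my addition, since `backward()` needs a scalar):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.001)

z = torch.tensor([2.])
# create_graph=True makes the Jacobian itself differentiable,
# so gradients can flow from the loss back to the parameters.
jac = torch.autograd.functional.jacobian(model, z, create_graph=True)
loss = (torch.abs(jac) ** 2).sum()

opt.zero_grad()
loss.backward()
opt.step()
```

For a `Linear(1, 1)` layer the Jacobian with respect to the input is just the weight, so here the loss reduces to the squared weight and the step shrinks it toward zero.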
