Get gradient and Jacobian wrt the parameters

Hi,

It is tricky to use these functions with nn.Modules because they compute the Jacobian of the output with respect to the input, while in your case you want the Jacobian with respect to the parameters.
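To illustrate the default behaviour, here is a minimal sketch (the model and shapes are made up for the example):

```python
import torch
from torch.autograd.functional import jacobian

# hypothetical model: the Jacobian below is d(output)/d(input),
# NOT d(output)/d(parameters)
model = torch.nn.Linear(3, 2)
x = torch.randn(3)

J = jacobian(model, x)  # shape (2, 3): output dim x input dim
```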

You have two options here:

  • You can use an approach similar to this code, which extracts the weights and puts them back into the Module, so that you can write a function that actually takes the parameters as input. You can then call functions like torch.autograd.functional.jacobian() with it (see the first sketch after this list).
  • Write by hand a function that reconstructs the Jacobian for an nn.Module, similar to the one you linked, but instead of giving x to autograd.grad, give it model.parameters() to get the gradients with respect to the parameters rather than the input (see the second sketch after this list).
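For the first option, a minimal sketch of the swap-the-parameters trick, assuming a flat module (nested modules would need dotted-name handling when swapping tensors in and out):

```python
import torch
from torch.autograd.functional import jacobian

model = torch.nn.Linear(3, 2)
x = torch.randn(3)

names = [n for n, _ in model.named_parameters()]
params = tuple(p.detach().requires_grad_() for p in model.parameters())

def func(*new_params):
    # Swap the given tensors into the module in place of its
    # Parameters so autograd tracks them through the forward pass.
    # NOTE: this mutates `model`; a real version should restore state.
    for name, p in zip(names, new_params):
        delattr(model, name)     # drop the registered nn.Parameter
        setattr(model, name, p)  # replace it with a plain tensor
    return model(x)

J = jacobian(func, params)
# J is a tuple with one entry per parameter:
# J[0].shape == (2, 2, 3)  -> d(output) / d(weight)
# J[1].shape == (2, 2)     -> d(output) / d(bias)
```

For the second option, a sketch that reconstructs each Jacobian one output element at a time with autograd.grad:

```python
import torch

model = torch.nn.Linear(3, 2)
x = torch.randn(3)
params = list(model.parameters())

out = model(x)
rows = [[] for _ in params]
for i in range(out.numel()):
    # gradient of the i-th output element wrt every parameter;
    # retain_graph so the same graph can be backpropped again
    grads = torch.autograd.grad(out.view(-1)[i], params, retain_graph=True)
    for j, g in enumerate(grads):
        rows[j].append(g)

# stack rows into one Jacobian of shape (out.numel(), *param.shape) per param
jacobians = [torch.stack(r) for r in rows]
# jacobians[0].shape == (2, 2, 3), jacobians[1].shape == (2, 2)
```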
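The first approach lets you reuse everything in torch.autograd.functional directly, at the cost of temporarily mutating the module; the second keeps the module untouched but loops over output elements, so it is slower for large outputs.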