Cleanest way to get grads of grads

Hello,
I need to compute gradients of gradients (second-order derivatives) and I wanted to know the safest way to do this.
Basically, what I want to do is optimize the parameters of a model so that its outputs sit at a local minimum with respect to the parameters themselves. To do so, I was planning to optimize a loss that takes the gradient and the gradient of the gradient as inputs (so minimizing something like −∇²f / |∇f|). A minimal sketch of the mechanics I have in mind is below.
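Here is roughly what I'm trying, assuming `torch.autograd.grad` with `create_graph=True` is the right tool for double backward; the model, input, and the exact form of the loss are just toy placeholders for my setup:

```python
import torch

# Toy stand-ins for my actual model and data.
model = torch.nn.Linear(4, 1)
x = torch.randn(8, 4)
params = list(model.parameters())

f = model(x).sum()  # scalar f(theta)

# First-order grads w.r.t. the parameters; create_graph=True keeps the
# graph so the grads themselves stay differentiable.
grads = torch.autograd.grad(f, params, create_graph=True)
grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads) + 1e-12)

# Differentiate the gradient norm again w.r.t. the parameters
# (a grad-of-grad / Hessian-vector-product style quantity).
grads2 = torch.autograd.grad(grad_norm, params, create_graph=True)
grads2_norm = torch.sqrt(sum((g ** 2).sum() for g in grads2) + 1e-12)

# Placeholder for the -∇²f / |∇f| objective I described; the exact
# form is what I'm unsure about -- this just exercises the mechanics.
loss = -grads2_norm / grad_norm
loss.backward()  # works because both grad calls kept the graph
```

Is this the safe way to do it, or are there pitfalls (e.g. with `create_graph=True` and memory, or with in-place optimizer updates) that I should watch out for?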
Thanks in advance for your help.