Second-order gradient optimization

Hi, I am trying to optimize an objective that includes a dot product of gradients:

```python
optimizer = optim(model.parameters())
regular_loss = criteria(model(batch), target)
loss = regular_loss - regular_loss.grad(batch[i]) * regular_loss.grad(batch[j])
```

where `regular_loss.grad(batch[i])` denotes the gradient of `regular_loss` w.r.t. sample `i` of the batch.
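
Written out, the objective I have in mind is roughly the following (here $x_i$ and $x_j$ are the two samples, $\theta$ the model parameters, $\mathcal{L}_{\text{reg}}$ the regular loss, and $\langle \cdot,\cdot \rangle$ the dot product):

$$
\mathcal{L}(\theta) = \mathcal{L}_{\text{reg}}(\theta) - \big\langle \nabla_{x_i} \mathcal{L}_{\text{reg}},\; \nabla_{x_j} \mathcal{L}_{\text{reg}} \big\rangle
$$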

The problem is that `regular_loss.grad(batch[i])` does not have `requires_grad`, so when I call `loss.backward()` the dot-product term behaves like a constant: it is meaningless to the optimizer (its contribution to the parameter gradients is 0).
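
Concretely, my attempt looks roughly like this (I am using `torch.autograd.grad` to get the gradients w.r.t. the inputs; `i` and `j` are fixed indices, and the `.sum()` is the dot product):

```python
import torch

# track the inputs so I can take d(loss)/d(input)
batch = batch.detach().requires_grad_(True)

regular_loss = criteria(model(batch), target)

# gradient of regular_loss w.r.t. every input sample in the batch;
# retain_graph=True so I can still call loss.backward() afterwards
input_grads = torch.autograd.grad(regular_loss, batch, retain_graph=True)[0]

# input_grads comes back with requires_grad == False, so this dot-product
# term is a constant as far as autograd is concerned
loss = regular_loss - (input_grads[i] * input_grads[j]).sum()

optimizer.zero_grad()
loss.backward()   # only regular_loss contributes to the parameter gradients
optimizer.step()
```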

How can I optimize this objective? Thanks.