Loss function which uses the derivative of the net w.r.t. input

Hey there,

I want to implement something probably related to:

but I am not quite sure how to do it.

I want a loss with multiple parts, one of which uses the derivative of the net with respect to an input value. The background is that I want to impose characteristics on the linearisations of the net at certain values of x.

Small examples with notation for clarification:

  • nn(x) : R β†’ R is our network
  • x is the input
  • y is the true value
  • x_1, x_2 are some values in R
  • d_nn(x*) is the derivative of nn(x) w.r.t x at x*

and I want a loss which could, for example, look like:

a = nn(x_2) - nn(x_1) + d_nn(x_1)*x - d_nn(x_2)*x
b = d_nn(x_1) - d_nn(x_2)

loss = (a/b - y)**2

I am still new to pytorch and all help is highly appreciated!

To get the derivative of your network (net) with respect to an input (x1), use torch.autograd.grad(), e.g.

dnet_by_dx1 = torch.autograd.grad(net(x1), x1, create_graph=True)[0]

where x1 needs to be a tensor with requires_grad=True (torch.autograd.Variable is deprecated; plain tensors track gradients in current PyTorch). create_graph=True keeps the computation graph for this derivative, so you can later backpropagate through it when optimising your loss.
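Putting this together with the loss from the question, here is a minimal sketch. The network architecture and the concrete values of x, x_1, x_2 and y are made up purely for illustration; substitute your own:

```python
import torch

# Hypothetical tiny network; any nn.Module mapping R -> R works the same way.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 16),
    torch.nn.Tanh(),
    torch.nn.Linear(16, 1),
)

def d_nn(x):
    """Derivative of net w.r.t. its input at x.

    create_graph=True keeps the graph so the derivative itself can be
    differentiated when loss.backward() runs."""
    x = x.requires_grad_(True)
    out = net(x)
    return torch.autograd.grad(out.sum(), x, create_graph=True)[0]

# Example values (assumed for illustration only).
x  = torch.tensor([[0.5]])
x1 = torch.tensor([[0.0]])
x2 = torch.tensor([[1.0]])
y  = torch.tensor([[2.0]])

# The loss from the question, written with the helper above.
a = net(x2) - net(x1) + d_nn(x1) * x - d_nn(x2) * x
b = d_nn(x1) - d_nn(x2)
loss = ((a / b - y) ** 2).mean()

# Gradients of the network parameters now flow through the
# derivative terms as well, thanks to create_graph=True.
loss.backward()
```

Note that b can become very small if the slopes at x_1 and x_2 are nearly equal, so in practice you may want to guard the division (e.g. clamp |b| away from zero).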
