How to backpropagate through the derivative?

How do I backpropagate when my loss involves the derivative of the net output?
For example:
My input is u, so the net output can be seen as f(u).
My target is target = a * f'(u) + b, where f'(u) means the derivative of f with respect to u.
So my loss would be |ground truth - target|.
Can this kind of loss be backpropagated in PyTorch?
I tried to backpropagate through the derivative, but it doesn't work.
Here is my code:

import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
y = x + 2
out = y
out.backward(torch.ones(2, 2))  # first backward populates x.grad
k = 2 * x.grad                  # build a new expression from the gradient
k.backward()                    # this call raises the error below

The error is:
k.backward()
RuntimeError: element 0 of variables tuple is volatile

It seems that PyTorch does not save the intermediate variables when it calculates the derivative.
Is there any way to solve this kind of problem?
Thank you.


Hi,

For this to work, you need to pass the create_graph=True option to the first .backward() call so that autograd knows you will need to call .backward() on the gradient itself.
Also, if you need the gradient with respect to a specific variable, you can use the autograd.grad(outputs, inputs) function to get the derivative of the output(s) with respect to the input(s). For example, out.backward() is roughly equivalent to autograd.grad(out, model.parameters()).
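For the setup in the question, here is a minimal sketch (the toy net f, input u, constants a and b, and the ground truth are hypothetical placeholders) that uses autograd.grad with create_graph=True so the derivative stays in the graph:

import torch
from torch import nn, autograd

# hypothetical toy setup: a small net f, input u, constants a and b
f = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
u = torch.randn(8, 1, requires_grad=True)
a, b = 2.0, 0.5
ground_truth = torch.randn(8, 1)

out = f(u)
# df/du, with create_graph=True so we can backpropagate through the derivative itself
dfdu, = autograd.grad(out, u, grad_outputs=torch.ones_like(out), create_graph=True)

target = a * dfdu + b
loss = (ground_truth - target).abs().mean()
loss.backward()  # gradients now reach f's parameters through the derivative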


It works!!!
Sorry, I hadn't read the documentation completely.
Thank you!

Thank you, I have been looking for an example of how to use the autograd API. IMO, a small example of using the autograd.grad API should be added to the PyTorch docs. Furthermore, examples of higher-order derivatives would help as well, for example double backpropagation, especially since loss.backward(create_graph=True) no longer works and causes a memory leak due to some changes in the C++ backend (weak to strong pointers); see the issue page.
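For reference, a minimal sketch of double backpropagation using autograd.grad instead of loss.backward(create_graph=True); the model, data, and penalty weight lam here are hypothetical:

import torch
from torch import nn, autograd

# hypothetical model and batch
model = nn.Linear(4, 1)
x = torch.randn(16, 4, requires_grad=True)
y = torch.randn(16, 1)
lam = 0.1

pred = model(x)
loss = ((pred - y) ** 2).mean()

# first derivative wrt the input, kept in the graph for the second backward
grad_x, = autograd.grad(loss, x, create_graph=True)

# double backpropagation: penalize the norm of the input gradient as well
total = loss + lam * grad_x.pow(2).sum()
total.backward()  # second backward differentiates through grad_x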
