I’m sorry about the poor subject line, but I don’t know how to describe this better. I’m trying to compute some second-order derivatives.
If I do this:
import torch as tc
x = tc.tensor(0.0, requires_grad=True)
loss = x**2
g = tc.autograd.grad(loss, x, create_graph=True)[0]  # first derivative: 2x
g2 = tc.autograd.grad(g, x)[0]                       # second derivative: 2
It runs fine. Yay!
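For reference, the printed values match the analytic derivatives (2x = 0 and 2 at x = 0):

print(g.item(), g2.item())  # 0.0 2.0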
It also works well with, say, a sine:
import torch as tc
x = tc.tensor(0.0, requires_grad=True)
loss = tc.sin(x)
g = tc.autograd.grad(loss, x, create_graph=True)[0]
g2 = tc.autograd.grad(g, x)[0]
Yay! Again, g is cos(0) = 1 and g2 is -sin(0) = 0, as expected.
But for some other functions this fails. For example:
import torch as tc
x = tc.tensor(0.0, requires_grad=True)
loss = x
g = tc.autograd.grad(loss, x, create_graph=True)[0]
g2 = tc.autograd.grad(g, x)[0]
Fails with:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
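Inspecting g, it seems the first call already returns a constant with no graph attached, which I assume is because d(loss)/dx = 1 doesn’t depend on x:

print(g)                # tensor(1.)
print(g.requires_grad)  # False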
It also doesn’t work for:
import torch as tc
x = tc.tensor(0.0, requires_grad=True)
loss = x + 1
g = tc.autograd.grad(loss, x, create_graph=True)[0]
g2 = tc.autograd.grad(g, x)[0]
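Same RuntimeError as above, and here too g comes back as a plain tensor(1.) with requires_grad == False, since the + 1 doesn’t change the derivative. The only workaround I’ve come up with is to special-case constant gradients myself (just a sketch, assuming a zero second derivative is the right answer whenever the first derivative carries no graph):

# workaround sketch: if the first derivative is a constant
# (no grad_fn), its derivative w.r.t. x is identically zero
if g.requires_grad:
    g2 = tc.autograd.grad(g, x)[0]
else:
    g2 = tc.zeros_like(x)

But that feels hacky.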
Any ideas?