I am using the code below, which calls autograd.grad twice. The first call runs fine.

The second call gives the error "One of the differentiated Tensors appears to not have been used in the graph". I have tried my best to understand what is going wrong here, but could not solve it.

u_x = torch.autograd.grad(u, x, torch.ones_like(u), retain_graph=True, create_graph=True)[0]
u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]  # error raised here
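For reference, here is a self-contained version that reproduces the error. The single `nn.Linear` layer is a hypothetical stand-in for my actual model; any model that is linear in `x` behaves the same way:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for the real model: a single linear layer,
# so u is linear in x and d2u/dx2 is identically zero.
model = nn.Linear(1, 1)

x = torch.randn(5, 1, requires_grad=True)
u = model(x)

# First derivative works: u_x depends on the layer's weight.
u_x = torch.autograd.grad(u, x, torch.ones_like(u),
                          retain_graph=True, create_graph=True)[0]

# Second derivative fails: u_x is constant in x, so x does not
# appear in u_x's graph.
err = None
try:
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x),
                               create_graph=True)[0]
except RuntimeError as e:
    err = e

print(err)
```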

Your function looks to be linear, so its second-order grads are zero. The issue is that when that is the case, autograd will sometimes (most of the time?) disconnect the graph. Autograd unfortunately doesn't distinguish between zero gradients and no gradients flowing (i.e., no graph).
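If the zero second derivative is actually expected, one way to handle the disconnected graph is to pass `allow_unused=True` and map the resulting `None` back to an explicit zero tensor. A minimal sketch, again assuming a linear `nn.Linear` stand-in for the model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for the real model: linear, so d2u/dx2 == 0.
model = nn.Linear(1, 1)

x = torch.randn(5, 1, requires_grad=True)
u = model(x)

u_x = torch.autograd.grad(u, x, torch.ones_like(u),
                          retain_graph=True, create_graph=True)[0]

# allow_unused=True returns None instead of raising when x is not
# part of u_x's graph; treat None as an all-zero second derivative.
u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x),
                           create_graph=True, allow_unused=True)[0]
if u_xx is None:
    u_xx = torch.zeros_like(x)

print(u_xx)
```

Recent PyTorch releases also accept a `materialize_grads=True` argument to `torch.autograd.grad`, which returns zero tensors for unused inputs directly instead of `None`.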

Thanks soulitzer. I agree, and it makes complete sense. I tried the second (later) code in TensorFlow, and it does give me the second gradient. The model in the second code is linear, but the inputs are not related to the output y. Moreover, I have seen the second code work in PyTorch as well. I just could not resolve it after so many days.

This example comes from my work on solving the 2D wave equation. Here u would be the solution, and the three inputs are its coordinates: x and y (location) and t (time).
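For that wave-equation setting, the "not used in the graph" error disappears once the network is nonlinear, because u_x then genuinely depends on the coordinates. A minimal sketch; the tanh MLP, layer sizes, and wave speed c = 1 are illustrative assumptions, not the original model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative network u(x, y, t): the tanh nonlinearity makes the
# second derivatives w.r.t. the coordinates generally nonzero.
net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))

x = torch.randn(8, 1, requires_grad=True)
y = torch.randn(8, 1, requires_grad=True)
t = torch.randn(8, 1, requires_grad=True)
u = net(torch.cat([x, y, t], dim=1))

ones = torch.ones_like(u)
u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
u_y = torch.autograd.grad(u, y, ones, create_graph=True)[0]
u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]

# Second derivatives now succeed: u_x depends on x through tanh.
u_xx = torch.autograd.grad(u_x, x, ones, create_graph=True)[0]
u_yy = torch.autograd.grad(u_y, y, ones, create_graph=True)[0]
u_tt = torch.autograd.grad(u_t, t, ones, create_graph=True)[0]

# Wave-equation residual u_tt - c^2 (u_xx + u_yy), with c = 1 assumed.
residual = u_tt - (u_xx + u_yy)
print(residual.shape)
```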