# Second autograd.grad gives error: "Tensors appears to not have been used in the graph"

Hello

I am using the code below, where I have to call autograd.grad twice. The first call runs fine.

The second call gives the error "One of the differentiated Tensors appears to not have been used in the graph". I tried my best to understand what is going wrong here, but could not solve it.

Help is much appreciated. The code is shown below.

thanks
salman
import torch

x=torch.randn(10,3).float()
x.requires_grad_(True)  # needed so autograd.grad can differentiate with respect to y2

y=torch.sum(x,1)
y2=torch.flatten(y)

L1=torch.nn.Linear(10, 50) # 3 being x, y (depth), and time
L2=torch.nn.Linear(50, 50)
L3=torch.nn.Linear(50, 10)

#Model Output
u=L1(y2)
u=L2(u)
u=L3(u)

u=u.reshape(u.shape[0],1)  # reshape to a column vector; u.shape[0] is the batch dimension

# the first autograd.grad works ok as expected
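# (reconstruction: the original call is not shown in the post, so the variable
#  name u_y and the grad_outputs/flag choices below are assumptions)
u_y=torch.autograd.grad(u, y2, grad_outputs=torch.ones_like(u),
                        retain_graph=True, create_graph=True)[0]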

#error in the last line
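# (reconstruction of the failing call; it raises
#  "One of the differentiated Tensors appears to not have been used in the graph"
#  because u is linear in y2, so u_y does not depend on y2)
u_yy=torch.autograd.grad(u_y, y2, grad_outputs=torch.ones_like(u_y),
                         retain_graph=True, create_graph=True)[0]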

Your function looks to be linear, and the second-order gradients of a linear function are zero. The issue is that when that is the case, sometimes (most of the time?) autograd will disconnect the graph. Unfortunately, autograd doesn't distinguish between zero gradients and no gradients flowing at all (i.e., no graph).
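A minimal sketch of both behaviours (illustrative, not from the original code): for a purely linear model the second autograd.grad call fails unless allow_unused=True is passed, in which case it returns None rather than a tensor of zeros; adding a nonlinearity such as torch.tanh keeps the input in the graph of the first gradient, so the second derivative comes back as a real tensor.

import torch

lin = torch.nn.Linear(3, 1)
x = torch.randn(10, 3, requires_grad=True)

# Purely linear model: du/dx depends only on the weights, not on x,
# so x is dropped from the graph when the first gradient is built.
u = lin(x)
du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]

# Without allow_unused=True this raises
# "One of the differentiated Tensors appears to not have been used in the graph";
# with it, autograd returns None rather than a tensor of zeros.
d2u = torch.autograd.grad(du.sum(), x, allow_unused=True)[0]
print(d2u)  # None

# With a nonlinearity, du/dx depends on x, the graph stays connected,
# and the second call returns an actual tensor.
u2 = lin(torch.tanh(x))
du2 = torch.autograd.grad(u2.sum(), x, create_graph=True)[0]
d2u2 = torch.autograd.grad(du2.sum(), x)[0]
print(d2u2.shape)  # torch.Size([10, 3])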

Thanks soulitzer. I changed it to a function with a nonzero second derivative, and it worked.


Can this also be corrected? This is a simplified example of the Helmholtz equation, shown here:

import torch
import numpy as np

x=torch.randn(10,3).float()

L1=torch.nn.Linear(3, 50)
L2=torch.nn.Linear(50, 50)
L3=torch.nn.Linear(50, 1)

#Model Output
u=L1(x)
u=L2(u)
u=L3(u)

u=u.reshape(u.shape[0],1)  # reshape to a column vector; u.shape[0] is the batch dimension
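The network above again has no nonlinearity and x does not require grad, so second derivatives with respect to x would hit the same disconnection. Below is a sketch of how the first and second derivatives needed for a Helmholtz-style residual might be taken, assuming a tanh activation between the layers (the names u_x and u_xx and the choice of tanh are illustrative, not the original code):

import torch

x = torch.randn(10, 3, requires_grad=True)  # columns: x, y (depth), time

L1 = torch.nn.Linear(3, 50)
L2 = torch.nn.Linear(50, 50)
L3 = torch.nn.Linear(50, 1)

# tanh between the layers keeps the graph of the first gradient connected to x
u = L3(torch.tanh(L2(torch.tanh(L1(x)))))  # shape (10, 1)

# first derivatives of u with respect to the three inputs
u_x = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                          create_graph=True)[0]  # shape (10, 3)

# second derivatives (each entry is a row sum of the per-sample Hessian;
# for a Laplacian you would differentiate each column of u_x separately)
u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x),
                           create_graph=True)[0]  # shape (10, 3)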