Hi,
I have a question about computing the second-order derivative of a network's output with respect to its input. Say,
class Test(nn.Module):
    def __init__(self):
        super().__init__()
        bo_b = False
        self.l1 = nn.Linear(1, 4, bias=bo_b).to(device)
        self.l2 = nn.Linear(4, 1, bias=bo_b).to(device)

    def forward(self, v):
        v = self.l1(v)
        v = self.l2(v)
        return v
fnn = Test()
xx = [[0.0], [0.5], [1.0]]
xx_torch = torch.tensor(xx, requires_grad=True, device=device, dtype=torch.float32)
ge_out = fnn(xx_torch)
df_dx = torch.autograd.grad(ge_out, xx_torch,
                            grad_outputs=torch.ones(xx_torch.size(), device=device),
                            create_graph=True)[0]
df2_dx2 = torch.autograd.grad(df_dx, xx_torch)[0]
But the last line gives me the error message: "One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior."
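For comparison, here is a sketch of the same computation with a tanh inserted between the two layers (the tanh and the 1 → 4 → 1 sizes are my assumption, not part of my original model). Since a stack of linear layers has a first derivative that is constant in the input, df_dx no longer depends on xx_torch, which seems to be what triggers the "not used in the graph" message; with the nonlinearity in place, the second call runs without the error:

```python
import torch
import torch.nn as nn

device = "cpu"  # assumption: CPU for a self-contained example

class TestNonlinear(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(1, 4, bias=False)
        self.l2 = nn.Linear(4, 1, bias=False)

    def forward(self, v):
        # tanh makes the first derivative depend on the input,
        # so a second derivative w.r.t. the input exists in the graph
        return self.l2(torch.tanh(self.l1(v)))

fnn = TestNonlinear().to(device)
xx_torch = torch.tensor([[0.0], [0.5], [1.0]], requires_grad=True, device=device)
ge_out = fnn(xx_torch)

df_dx = torch.autograd.grad(ge_out, xx_torch,
                            grad_outputs=torch.ones_like(ge_out),
                            create_graph=True)[0]
df2_dx2 = torch.autograd.grad(df_dx, xx_torch,
                              grad_outputs=torch.ones_like(df_dx))[0]
print(df2_dx2.shape)  # torch.Size([3, 1])
```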
Can anyone help me with the problem? Thanks a lot!