Question about gradient

I have implemented the following code:

def calc_grad_hessian(func, x):
    x = torch.tensor(x, dtype=torch.float, requires_grad=True)
    y = func(x)
    grad = torch.autograd.grad(y, x, create_graph=True)[0]
    hessian = torch.zeros(x.shape[0], x.shape[0])
    for i in range(x.shape[0]):
        hessian[:, i] = torch.autograd.grad(grad[i], x, retain_graph=True)[0]
    return y.detach().numpy(), grad.detach().numpy(), hessian.detach().numpy()

Even though I have declared requires_grad=True on x, it always raises an error at
grad = torch.autograd.grad(y, x, create_graph=True)[0]:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Can anyone tell me why?

Thank you in advance.

Best Regards,

It’s impossible to tell why without looking at what func does. You probably want to bisect your func to find which operation breaks the autograd graph. Common ways to break the graph are constructing a new tensor with torch.ones_like or torch.tensor(...), or converting to NumPy and back.
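As a minimal sketch (the func names here are hypothetical, since the original func was not posted), this shows how rebuilding a tensor via NumPy produces exactly that RuntimeError symptom, while an ordinary tensor op keeps the graph intact:

```python
import torch

def broken_func(x):
    # Round-tripping through NumPy detaches the result from the graph;
    # the returned tensor has requires_grad=False and no grad_fn.
    x = torch.tensor(x.detach().numpy())
    return (x ** 2).sum()

def working_func(x):
    # Plain tensor operations stay on the autograd graph.
    return (x ** 2).sum()

x = torch.tensor([1.0, 2.0], requires_grad=True)

y = broken_func(x)
print(y.requires_grad)  # False -> autograd.grad(y, x) would raise the RuntimeError

y = working_func(x)
print(y.requires_grad)  # True
grad = torch.autograd.grad(y, x)[0]
print(grad)  # tensor([2., 4.])
```

Printing requires_grad (or checking y.grad_fn) at each step of your real func is a quick way to locate the line where the graph is cut.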