Curious behavior while getting partial derivatives

Hi all,

I am having a weird issue. Suppose I have some data, say x = torch.rand(5,1), and two functions, say $f(x) = x^2$ and $g(x) = x^3$. I know how to take the derivative (via autograd) of each function separately w.r.t. x. But now suppose the data is given in the form (x, f(x)) and (x, g(x)), as described below.
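For reference, the per-function version works fine for me (a minimal self-contained sketch):

import torch
from torch.autograd import grad

x = torch.rand(5, 1, requires_grad=True)
# grad_outputs of ones recovers the elementwise derivative, since each
# output entry depends only on the matching input entry.
df_dx = grad(x**2, x, grad_outputs=torch.ones_like(x), create_graph=True)[0]  # equals 2*x
dg_dx = grad(x**3, x, grad_outputs=torch.ones_like(x), create_graph=True)[0]  # equals 3*x**2

The concatenated setup is the following: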

import torch
from torch.autograd import grad

x = torch.rand(5, 1).requires_grad_(True)

def f(x):
    return x**2

def g(x):
    return x**3

# Each row pairs an input with the corresponding function value.
data_withf = torch.cat([x, f(x)], dim=1)   # shape (5, 2), rows (x, f(x))
data_withg = torch.cat([x, g(x)], dim=1)   # shape (5, 2), rows (x, g(x))
full_data = torch.cat([data_withf, data_withg], dim=0)  # shape (10, 2)

# Now I want to take the derivative of the 2nd coordinate of full_data
# w.r.t. the 1st coordinate (i.e. w.r.t. x).
y = full_data[:, 1].view(-1, 1).requires_grad_(True)
input_x = full_data[:, 0].view(-1, 1).requires_grad_(True)
grad(y, input_x,
     grad_outputs=torch.ones_like(input_x),
     create_graph=True, retain_graph=True,
     only_inputs=True,
     allow_unused=True)[0]

But this returns nothing (the result is None). My guess is that slicing full_data creates new tensors, so input_x is not on the autograd path that produced y, and with allow_unused=True the call just returns None. The reason for doing it in this convoluted way is the following: I have some input x's and a bunch of given functions f, and I want to train a neural network on the x's, the outputs of the functions f, and their derivatives. Is there an easy way to fix my issue? If it helps, I am happy to write down how I want to train the neural network using the x's, f(x)'s and f'(x)'s.
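In case it clarifies what I am after: the only workaround I have found is the per-function route, i.e. differentiate each function w.r.t. the original leaf x and then stack the derivatives the same way the data is stacked. A minimal sketch, reusing x, f, g and full_data from above:

dfdx = grad(f(x), x, grad_outputs=torch.ones_like(x), create_graph=True)[0]
dgdx = grad(g(x), x, grad_outputs=torch.ones_like(x), create_graph=True)[0]
# Row i of full_derivs is the derivative matching row i of full_data.
full_derivs = torch.cat([dfdx, dgdx], dim=0)  # shape (10, 1)

But I would prefer something that operates on full_data directly.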

Thank you.