# Question about using autograd function to compute derivative

Hi,
Hope everyone is staying safe. I realize this may be a really stupid question. I am trying to use autograd.grad to compute the derivative of some function I have defined. For simplicity, let us use a simple function f(x) = x^2. Here is my code:

import numpy as np
import torch
from torch.autograd import grad

x = np.array([1, 2, 3, 4])
y = x*x
x = torch.from_numpy(x).float()
y = torch.from_numpy(y).float()

#Now I want to compute the derivative of y with respect to x
dy = grad(y, x, grad_outputs=torch.ones_like(x),
          create_graph=True, retain_graph=True,
          only_inputs=True,
          allow_unused=True
          )[0]

But the output I get is (None,).
I am very confused. Any help will be highly appreciated. Thanks a lot in advance.

Hi Arijit!

The short answer is that you aren't using pytorch to calculate your
function. (You are using python and numpy.) So pytorch knows
nothing about the relationship between `y` and `x`, and can't calculate
the derivative (gradient) for you.

Clear your mind of numpy; work with pytorch tensors, instead. Only
`import numpy` if there is something you need to do that you can't do
without numpy.

`x` doesn't have `requires_grad` set yet (e.g. `x.requires_grad_(True)`),
so autograd won't track the gradient of `y` with respect to `x`. (Of course,
`x` isn't even a pytorch tensor yet.)

At this point, the cow is already out of the barn. You've already
calculated `x*x`, and there is no way for pytorch to go back and
figure out what you did before you set `x.requires_grad_(True)`.
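As a concrete illustration, here is a minimal sketch of your example done entirely in pytorch, with `requires_grad` set before the computation, so that autograd records `y = x*x` and can differentiate it:

```python
import torch
from torch.autograd import grad

# Build x as a pytorch tensor that autograd will track.
x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)

# Compute y with pytorch operations, so the computation graph is recorded.
y = x * x

# Differentiate y with respect to x; for f(x) = x^2 this gives 2*x.
dy = grad(y, x, grad_outputs=torch.ones_like(x), create_graph=True)[0]
print(dy)  # tensor([2., 4., 6., 8.], grad_fn=<MulBackward0>)
```

Note that `torch.tensor(..., requires_grad=True)` replaces the numpy array and `torch.from_numpy()` round trip from your original code.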

It would be well worth taking a look at pytorch's autograd tutorial.

Good luck.

K. Frank


Thank you so much. I figured it out.