Hi,
Hope everyone is staying safe. I realize this may be a really stupid question. I am trying to use autograd.grad to compute derivative of some function I have defined. For simplicity, let us use a simple function f(x) = x^2. Here is my code:

import numpy as np
import torch
from torch.autograd import grad

x = np.array([1, 2, 3, 4])
y = x*x
x = torch.from_numpy(x).float()
y = torch.from_numpy(y).float()
x.requires_grad_(True)
y.requires_grad_(True)

#Now I want to compute the derivative of y with respect to x
dy = grad(y, x, grad_outputs = torch.ones_like(x),
create_graph=True, retain_graph=True,
only_inputs=True,
allow_unused=True
)[0]

But the output I get is (None,).
I am very confused. Any help will be highly appreciated. Thanks a lot in advance.

The short answer is that you aren't using pytorch to calculate your
function. (You are using python and numpy.) So pytorch knows
nothing about the relationship between y and x, and can't calculate
the derivative (gradient) for you.

Clear your mind of numpy – work with pytorch tensors, instead. Only
import numpy if there is something you need to do that you can't do
without numpy.

At the point where you compute y = x*x, x doesn't have requires_grad
set yet (e.g. x.requires_grad_(True)), so autograd won't track the
gradient of y with respect to x. (Of course, at that point x isn't
even a pytorch tensor yet – it's a numpy array.)

At this point, the cow is already out of the barn. You've already
calculated x*x, and there is no way for pytorch to go back and
figure out what you did before you set x.requires_grad_(True).

It would be well worth taking a look at pytorch's autograd tutorial.
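Putting the above together, here is a sketch of how your example could be rewritten so that autograd can track the computation – build x as a pytorch tensor with requires_grad set *before* computing y, and compute y with pytorch ops:

```python
import torch
from torch.autograd import grad

# Create x directly as a pytorch tensor, with requires_grad=True
# set BEFORE any computation we want autograd to track.
x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)

# Compute y with pytorch operations, so autograd records the graph.
y = x * x

# Now grad can compute dy/dx = 2x for each element.
dy = grad(y, x, grad_outputs=torch.ones_like(x), create_graph=True)[0]
print(dy)  # a tensor containing [2., 4., 6., 8.]
```

Because the graph from x to y now exists, grad returns the actual derivative instead of None. (If you really do start from a numpy array, convert it with torch.from_numpy(...).float() and call requires_grad_(True) on it first, and only then compute y from the resulting tensor.)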