Question about using autograd.grad to compute a derivative

Hi,
Hope everyone is staying safe. I realize this may be a really stupid question. I am trying to use autograd.grad to compute the derivative of a function I have defined. For simplicity, let us use the simple function f(x) = x^2. Here is my code:

import numpy as np
import torch
from torch.autograd import grad

x = np.array([1, 2, 3, 4])
y = x*x
x = torch.from_numpy(x).float()
y = torch.from_numpy(y).float()
x.requires_grad_(True)
y.requires_grad_(True)

# Now I want to compute the derivative of y with respect to x
dy = grad(y, x, grad_outputs=torch.ones_like(x),
          create_graph=True, retain_graph=True,
          only_inputs=True, allow_unused=True)[0]

But dy comes out as None.
I am very confused. Any help would be highly appreciated. Thanks a lot in advance.

Hi Arijit!

The short answer is that you aren’t using pytorch to calculate your
function. (You are using python and numpy.) So pytorch knows
nothing about the relationship between y and x, and can’t calculate
the derivative (gradient) for you. (And because you pass
allow_unused = True, grad() returns None for the “unused” input x
instead of raising an error.)

Clear your mind of numpy – work with pytorch tensors, instead. Only
import numpy if there is something you need to do that you can’t do
without numpy.
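
For example, here is your f(x) = x^2 computation redone with pytorch
from the start (just a minimal sketch, keeping your grad() call):

import torch
from torch.autograd import grad

# build x directly as a pytorch tensor, with gradient tracking
# turned on from the start
x = torch.tensor([1.0, 2.0, 3.0, 4.0], requires_grad=True)

# compute y with pytorch operations, so autograd records the graph
y = x * x

# now grad() can differentiate y with respect to x
dy = grad(y, x, grad_outputs=torch.ones_like(x), create_graph=True)[0]
# dy is the derivative 2 * x, i.e. tensor values [2., 4., 6., 8.]
print(dy)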

At the point where you compute y = x*x, x doesn’t have requires_grad
set yet (e.g. via x.requires_grad_(True)), so autograd won’t track the
gradient of y with respect to x. (Of course, at that point x isn’t even
a pytorch tensor.)

At this point, the cow is already out of the barn. You’ve already
calculated x*x, and there is no way for pytorch to go back and
figure out what you did before you set x.requires_grad_(True).
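
If you really do need to start from a numpy array, convert it to a
tensor and turn on requires_grad before you compute y, so that the
computation happens inside of pytorch. (Again, a minimal sketch:)

import numpy as np
import torch
from torch.autograd import grad

# convert the numpy array to a pytorch tensor first
x = torch.from_numpy(np.array([1.0, 2.0, 3.0, 4.0])).float()
x.requires_grad_(True)   # turn on tracking before computing y
y = x * x                # a pytorch computation autograd records

dy = grad(y, x, grad_outputs=torch.ones_like(x))[0]
print(dy)   # tensor([2., 4., 6., 8.])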

It would be well worth taking a look at pytorch’s autograd tutorial.

Good luck.

K. Frank

Thank you so much. I figured it out.