I am a beginner with PyTorch, and I wish to use its autograd functionality. I have the following question.

I wish to build a tensor from a list whose entries are computed from another variable that has derivatives, and I would like the resulting tensor to maintain those derivatives. For instance, here is the code:

import numpy as np
import torch
import numpy.linalg as linalg

Now I define a variable phi with derivatives:

phi = torch.randn(2, requires_grad=True)

And I wish to define a function such that

def Afunction(phi):
    A = torch.tensor([torch.cos(phi[0]), torch.sin(phi[1])])
    return A

I find that when I define it in this way, the derivative of A is lost:

print(phi)
print(Afunction(phi))
tensor([ 0.2366, -0.5896], requires_grad=True)
tensor([ 0.9721, -0.5561])

There might be other ways to construct the input tensor. However, I have to do it this way, because the actual list I have (beyond this example, [[…]]) is printed by some other code and is very long; I have no other way to bring it into Python.

Is there any resolution to this problem?

Hi junyuphybies!

I am not an expert on autograd, but (if I understand your question)
I don’t believe that there is a convenient way of doing what you want.

The problem, as I see it, is that the `grad_fn` of a tensor is attached
to the tensor as a whole, and is not attached to individual elements
of the tensor.
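You can see this directly. Here is a minimal sketch (the cos() / sin() choice just mirrors your example): an individual element expression such as torch.cos (phi[0]) carries a grad_fn, but wrapping such elements in torch.tensor() creates a brand-new tensor with no grad_fn:

``````
import torch

phi = torch.randn (2, requires_grad = True)

elem = torch.cos (phi[0])   # a single element of the computation
print (elem.grad_fn)        # a CosBackward node -- still connected to phi

A = torch.tensor ([torch.cos (phi[0]), torch.sin (phi[1])])
print (A.grad_fn)           # None -- torch.tensor() builds a brand-new tensor,
                            # so the connection to phi is lost
``````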

The proper way to accomplish your goal would be to write a custom
autograd Function that has its own `backward()` method that knows
how to compute the derivative of its `forward()` method.

See Extending torch.autograd for the details.
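For example, here is a minimal sketch of such a custom Function for the cos() / sin() case above (the name CosSin and the details are just illustrative):

``````
import torch

class CosSin (torch.autograd.Function):
    # computes [cos (phi[0]), sin (phi[1])] with a hand-written derivative

    @staticmethod
    def forward (ctx, phi):
        ctx.save_for_backward (phi)
        return torch.stack ([torch.cos (phi[0]), torch.sin (phi[1])])

    @staticmethod
    def backward (ctx, grad_output):
        phi, = ctx.saved_tensors
        # d cos (phi[0]) / d phi[0] = -sin (phi[0])
        # d sin (phi[1]) / d phi[1] =  cos (phi[1])
        return grad_output * torch.stack ([-torch.sin (phi[0]), torch.cos (phi[1])])

phi = torch.tensor ([0.2366, -0.5896], requires_grad = True)
CosSin.apply (phi).backward (gradient = torch.ones (2))
print (phi.grad)   # should match [-sin (phi[0]), cos (phi[1])]
``````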

However, in the special case that you really do want `cos()` and `sin()`
(and they’re not just for the purposes of a simple example), you can
use pytorch’s (recent) support for complex autograd, together with
the fact that the (complex) exponential function has `cos()` and `sin()`
inside of it: exp (1.0j * phi) = cos (phi) + 1.0j * sin (phi), so taking
the real part after multiplying by 1.0 keeps cos (phi[0]), while
multiplying by -1.0j moves sin (phi[1]) into the real part.

Here is a script that implements this approach:

``````
import torch
print (torch.__version__)

def Afunction (phi):   # your version
    A = torch.tensor ([torch.cos (phi[0]), torch.sin (phi[1])])
    return A

def Bfunction (phi):   # version using complex exp() with autograd
    z = torch.exp (1.0j * phi)
    w = torch.tensor ([1.0 + 0.0j, 0.0 - 1.0j]) * z
    f = w.real
    return f

phi = torch.tensor ([ 0.2366, -0.5896], requires_grad = True)

print (phi)
print (Afunction (phi))
print (Bfunction (phi))

Bfunction (phi).backward (gradient = torch.tensor ([1.0, 1.0]))

print (phi.grad)
print (-torch.sin (phi), torch.cos (phi))   # check phi.grad
``````

And here is its output:

``````
1.9.0.dev20210412