I am a beginner with PyTorch and want to use its autograd functionality. I have the following question.

I want to build a tensor whose entries are functions of another tensor that has gradients enabled, and I would like the result to keep tracking those gradients. Here is a minimal example:

```python
import numpy as np
import numpy.linalg as linalg
import torch
```

I define a tensor `phi` that requires gradients:

```python
phi = torch.randn(2)
phi.requires_grad = True
```
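(If it matters, I assume the two-step construction above is equivalent to passing the flag directly when the tensor is created:)

```python
import torch

# Presumably equivalent: create the leaf tensor with gradients enabled in one step
phi = torch.randn(2, requires_grad=True)
print(phi.requires_grad)
```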

I then want to define a function such that:

```python
def Afunction(phi):
    return torch.tensor([torch.cos(phi[0]), torch.sin(phi[1])])
```

I find that when I define it this way, the gradient information of the result is lost:

```python
print(phi)
# tensor([ 0.2366, -0.5896], requires_grad=True)
print(Afunction(phi))
# tensor([ 0.9721, -0.5561])
```
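Inspecting the result seems to confirm that the autograd graph is gone; my understanding is that `torch.tensor` copies the raw values of its arguments into a brand-new leaf tensor:

```python
import torch

phi = torch.randn(2, requires_grad=True)

def Afunction(phi):
    # torch.tensor copies the *values* of its arguments into a fresh leaf
    # tensor, so the history linking the result back to phi is discarded
    return torch.tensor([torch.cos(phi[0]), torch.sin(phi[1])])

out = Afunction(phi)
print(out.requires_grad)  # False
print(out.grad_fn)        # None
```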

There may be better ways to construct the input, but I have to build it from a list like this: the actual list in my case, [[…]], is printed by some other code and is very long, and I have no other way to bring it into Python.

Is there any resolution to this problem?
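For reference, one workaround I have come across is `torch.stack`, which (as far as I understand) assembles the new tensor through an autograd-tracked operation, so the graph back to `phi` is kept. I am not sure whether this is the idiomatic fix:

```python
import torch

phi = torch.randn(2, requires_grad=True)

def Afunction(phi):
    # torch.stack is itself an autograd-aware operation, so the result
    # stays connected to phi in the computation graph
    return torch.stack([torch.cos(phi[0]), torch.sin(phi[1])])

out = Afunction(phi)
print(out.requires_grad)  # True
out.sum().backward()
print(phi.grad)           # gradient of cos(phi0) + sin(phi1): [-sin(phi0), cos(phi1)]
```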