How to get grad w.r.t. a non-leaf node (input)

import torch
import torch.nn as nn

class Mass(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.tensor(2e-2))

    def jacobian(self, q1, q2):
        f1 = torch.tensor([[self.weights * torch.cos(q1)],
                           [self.weights * torch.sin(q1)]], requires_grad=True)
        f1.sum().backward()
        Jv1 = q1.grad
        return Jv1

input1 = torch.tensor([[0.5]], requires_grad=True)
input2 = torch.tensor([[0.]], requires_grad=True)
m = Mass()
print(m.jacobian(input1, input2))

Hi everyone. I am new to PyTorch. I am trying to compute q1.grad, but it always comes back as None. Any help would be appreciated. Thanks.

You are detaching the input tensors from the computation graph by creating a new tensor:

f1 = torch.tensor([[self.weights * torch.cos(q1)], [self.weights * torch.sin(q1)]], requires_grad=True)

Replace the torch.tensor creation with e.g. torch.cat or torch.stack and it should work.
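
For illustration, here is a minimal sketch of that fix applied to the code above (keeping the original shapes and the 2e-2 initial weight). torch.stack builds f1 out of the existing tensors instead of copying their values, so q1 stays connected to the computation graph and receives a gradient:

import torch
import torch.nn as nn

class Mass(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.tensor(2e-2))

    def jacobian(self, q1, q2):
        # torch.stack reuses the existing tensors rather than copying
        # their values, so f1 stays attached to q1 in the graph
        f1 = torch.stack([self.weights * torch.cos(q1),
                          self.weights * torch.sin(q1)])
        f1.sum().backward()
        return q1.grad

input1 = torch.tensor([[0.5]], requires_grad=True)
input2 = torch.tensor([[0.]], requires_grad=True)
m = Mass()
print(m.jacobian(input1, input2))  # ~ tensor([[0.0080]]) instead of None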

@ptrblck Thanks a lot for the exact solution.

f_x_1 = ac*cos(q1)
f_y_1 = ac*sin(q1)
f1 = Matrix([f_x_1, f_y_1])

f_x_2 = a*cos(q1) + ac*cos(q1 + q2)
f_y_2 = a*sin(q1) + ac*sin(q1 + q2)
f2 = Matrix([f_x_2, f_y_2])

Jv1 = Matrix([[diff(f1, q1), diff(f1, q2)]])

M = m*(Jv1.T)*(Jv1)

Jv1.subs(q1, 0.5)  ----->>
[ -0.00958851077208406  0 ]
[  0.0175516512378075   0 ]

I wrote this code in Python using the SymPy module. Could you please let me know the best way to write similar code in PyTorch? I basically want to compute the Jacobian, a 2-by-2 matrix, w.r.t. the inputs (q1, q2). I am really having a hard time here. Thanks.

Did you check torch.autograd.functional.jacobian already to see if this would work for your use case?
You should be able to replace the other method directly with PyTorch operations, e.g. asin with torch.asin.
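
As an illustration, here is a minimal sketch of that suggestion for the single-link case (the ac = 2e-2 constant and f1 are taken from the post above; the scalar inputs are otherwise placeholders):

import torch
from torch.autograd.functional import jacobian

ac = 2e-2

def f1(q1, q2):
    # first-link forward kinematics; q2 does not appear,
    # so its column of the Jacobian is zero
    return torch.stack([ac * torch.cos(q1), ac * torch.sin(q1)])

q1 = torch.tensor(0.5)
q2 = torch.tensor(0.)
Jv1 = jacobian(f1, (q1, q2))
print(Jv1)
# (tensor([-0.0096,  0.0176]), tensor([0., 0.]))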


Thanks. It's my mistake; the expression is a*sin(q1), so a is a constant here, although I have written a different version of the code. I tried

fx1 = self.weights * torch.cos(q1) + 0 * q2

torch.autograd.grad(fx1, (q1, q2), retain_graph=True, allow_unused=True)

where a = self.weights. (I need to predict a.)

And I believe that it will work well. Will I face any difficulty training self.weights because I used autograd.grad? I heard that it could cause some problems with retaining the graph or something; I am not sure.

I’m not sure which issues you are referring to and I’m not aware of any regarding retaining the graph.
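
For what it's worth, here is a minimal sketch of that setup (the shapes and the 2e-2 initialization are taken from the earlier posts; the squared-gradient loss is just a placeholder). Passing create_graph=True to autograd.grad keeps the computed gradient differentiable, so self.weights can still be trained through it:

import torch
import torch.nn as nn

weights = nn.Parameter(torch.tensor(2e-2))
q1 = torch.tensor([[0.5]], requires_grad=True)
q2 = torch.tensor([[0.]], requires_grad=True)

fx1 = weights * torch.cos(q1) + 0 * q2
# create_graph=True keeps the gradient itself in the autograd graph,
# so a loss built from it can still backpropagate into `weights`
grad_q1, grad_q2 = torch.autograd.grad(
    fx1.sum(), (q1, q2), create_graph=True, allow_unused=True)

loss = grad_q1.pow(2).sum()  # placeholder loss on the gradient
loss.backward()
print(weights.grad)  # non-None, so `weights` receives a gradient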

Anyway, thank you for your time, sir.