Hi, I am currently trying to differentiate a function that contains Legendre functions, which I define recursively in a matrix. I wrote a simple piece of code computing a geometric series that reproduces the error:
import torch as t

def powers_of_x(N, X):
    Y = t.ones((N,))
    for I in range(1, N):
        Y[I] = Y[I - 1] * X  # in-place write into Y
    return Y

X = t.tensor([2.0], requires_grad=True)
SERIES = powers_of_x(9, X).sum()
SERIES.backward()
X.grad
I guess torch.autograd doesn’t like the fact that I’m overwriting the ones vector, i.e. constant variables. How do I get around this? I would greatly appreciate any help.
I understand. I think torch is complaining about the in-place index assignment…
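What happens under the hood is that the multiplication Y[I-1] * X saves Y[I-1] (a view into Y) for the backward pass, and every later index assignment bumps Y's version counter, so backward notices the saved value has gone stale. A small sketch of this (Tensor._version is an internal attribute, so the exact behaviour may differ between PyTorch versions):

import torch as t

X = t.tensor([2.0], requires_grad=True)
Y = t.ones(3)
V = Y[0] * X       # mul saves Y[0], a view into Y, for backward
print(Y._version)  # 0: no in-place ops on Y yet
Y[1] = 5.0         # index assignment is an in-place op on Y
print(Y._version)  # 1: the saved Y[0] is now stale
try:
    V.sum().backward()
except RuntimeError as err:
    print(err)     # complains that a tensor needed for backward was modified in-place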
This version might be slower, but it works: it builds each value from the previous one and stacks the results at the end:
import torch as t

def powers_of_x(N, X):
    Y = [t.ones(1)]  # start from a float tensor so the stacked dtypes match
    for _ in range(1, N):
        Y.append(Y[-1] * X)  # each step creates a new tensor, no in-place writes
    return t.stack(Y)
X = t.tensor([2.0], requires_grad=True)
SERIES = powers_of_x(9,X).sum()
SERIES.backward()
X.grad
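If the recursion really is just a running product, you could also avoid the Python loop with torch.cumprod, which is differentiable; a sketch (this only works when every entry is the previous one times the same factor, so it won't cover the general Legendre recursion):

import torch as t

def powers_of_x(N, X):
    # cumulative product of [1, X, X, ..., X] gives [1, X, X**2, ...]
    return t.cat([t.ones(1), X.expand(N - 1)]).cumprod(dim=0)

X = t.tensor([2.0], requires_grad=True)
powers_of_x(9, X).sum().backward()
X.grad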
Another option is to just clone the previous item in the assignment:
import torch as t

def powers_of_x(N, X):
    Y = t.ones(N)
    for i in range(1, N):
        # mul saves the clone rather than a view of Y, so later
        # in-place writes to Y no longer invalidate the saved tensor
        Y[i] = Y[i - 1].clone() * X
    return Y
X = t.tensor([2.0], requires_grad=True)
SERIES = powers_of_x(9,X).sum()
SERIES.backward()
X.grad
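As a sanity check for either version: the series is 1 + x + … + x^8, so its derivative is 1 + 2x + … + 8x^7, which equals 1793 at x = 2. Assuming powers_of_x from one of the snippets above:

import torch as t

X = t.tensor([2.0], requires_grad=True)
powers_of_x(9, X).sum().backward()

K = t.arange(1, 9, dtype=t.float32)
EXPECTED = (K * 2.0 ** (K - 1)).sum()  # analytic derivative at x = 2 -> 1793.0
print(X.grad, EXPECTED)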
Note that the stack solution can be quite a bit more efficient once you have substantial computation and perhaps a batch of X to process at the same time.
(And if you only have elementwise computation like this, the Python-level loop is probably not great anyway…)
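For a purely elementwise computation like this one, a vectorized version that also handles a batch of X could look like the following sketch (powers_of_x_batched is just a name I made up here):

import torch as t

def powers_of_x_batched(N, X):
    # X: shape (B,) -> result: shape (B, N) with rows [1, x, x**2, ...]
    return X[:, None] ** t.arange(N, dtype=X.dtype)

X = t.tensor([2.0, 3.0], requires_grad=True)
powers_of_x_batched(9, X).sum().backward()
X.grad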