Inplace modification of constants

Hi, I am currently trying to differentiate a function that contains Legendre functions, which I define recursively in a matrix. I wrote a small piece of code computing a geometric series that reproduces the error:

import torch as t

def powers_of_x(N,X):
    Y = t.ones((N,))
    for I in range(1,N):
        Y[I] = Y[I-1]*X  # in-place write into Y; this is what backward() later complains about
    return Y

X = t.tensor([2.0], requires_grad=True)
SERIES = powers_of_x(9,X).sum()

SERIES.backward()
X.grad

I guess torch.autograd doesn’t like the fact that I’m overwriting the ones vector, i.e. constant variables. How do I get around this? I would greatly appreciate any help.

I think a better implementation of powers_of_x(9,X) could be t.pow(X, t.arange(9)):

Updated code:

import torch as t

X = t.tensor([2.0], requires_grad=True)
SERIES = t.pow(X, t.arange(9)).sum()

SERIES.backward()
X.grad

Output:

tensor([1793.])
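As a sanity check, this matches the analytic derivative: d/dX of (1 + X + X^2 + … + X^8) is 1 + 2X + 3X^2 + … + 8X^7, which at X = 2 gives 1 + 4 + 12 + 32 + 80 + 192 + 448 + 1024 = 1793.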

Hi, thanks for the reply. What I am trying to figure out, however, is how to use autograd in a recursive function.

I understand. I think torch is complaining about the index assignment…

This version might be slower, but it works and bases each value on the previous one:

import torch as t

def powers_of_x(N,X):
    Y = [t.ones_like(X)]     # start from 1 with the same dtype/device as X
    for _ in range(1,N):
        Y.append(Y[-1] * X)  # each new power is built from the previous list entry
    return t.stack(Y)

X = t.tensor([2.0], requires_grad=True)
SERIES = powers_of_x(9,X).sum()

SERIES.backward()
X.grad

Output:

tensor([1793.])

I see. I guess this is an option, though unfortunately not a particularly elegant one. Thank you!

Another option is to just clone the previous item in the assignment:

import torch as t

def powers_of_x(N,X):
    Y = t.ones(N)
    for i in range(1,N):
        # clone() copies the previous entry, so the later in-place write into Y
        # does not invalidate the value saved for backward()
        Y[i] = Y[i-1].clone()*X
    return Y

X = t.tensor([2.0], requires_grad=True)
SERIES = powers_of_x(9,X).sum()

SERIES.backward()
X.grad

Output:

tensor([1793.])

Thanks, this is precisely what I was looking for.
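For completeness, here is a rough sketch of how the same clone-based pattern could be applied to the Legendre recursion from the original question, using Bonnet's recursion (n + 1) P_{n+1}(x) = (2n + 1) x P_n(x) - n P_{n-1}(x); the function name and test values below are only illustrative:

import torch as t

def legendre_values(N, X):
    # P_0(x) = 1, P_1(x) = x, then Bonnet's recursion
    # (n + 1) P_{n+1}(x) = (2n + 1) x P_n(x) - n P_{n-1}(x); assumes N >= 2
    P = t.ones(N)
    P[1] = X
    for n in range(1, N - 1):
        P[n + 1] = ((2*n + 1) * X * P[n].clone() - n * P[n - 1].clone()) / (n + 1)
    return P

X = t.tensor([0.5], requires_grad=True)
OUT = legendre_values(5, X).sum()
OUT.backward()
X.grad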

Note that the stack solution can be quite a bit more efficient once you have substantial computation and perhaps a batch of X to process at the same time.
(And if you have elementwise computation, this is probably not a good fit anyway…)
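To illustrate the batching point, the list-and-stack version works unchanged when X holds several values at once (the batch values below are made up), and a single backward() then gives one gradient entry per element:

import torch as t

def powers_of_x(N, X):
    Y = [t.ones_like(X)]     # one leading 1 per batch element
    for _ in range(1, N):
        Y.append(Y[-1] * X)  # broadcasts over the whole batch at once
    return t.stack(Y)        # shape (N, batch)

X = t.tensor([2.0, 3.0, 0.5], requires_grad=True)
SERIES = powers_of_x(9, X).sum()

SERIES.backward()
X.grad  # one gradient per batch element, e.g. 1793. for the entry at 2.0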

Best regards

Thomas
