I have a variable a and a bunch of functions f_k(a), so I create a tensor to hold the results of all these functions. Each time a function is computed, I also need to compute the gradient of that function with respect to a. Here is what I did:
import torch as th

k = 2
a = th.tensor(2.0, dtype=th.float32, requires_grad=True)  # variable
b = th.zeros(k, dtype=th.float32)  # tensor to hold results of the k functions
c = th.tensor(0.0, dtype=th.float32)

### this for-loop is just a dummy demo when I don't use a tensor to hold results
for i in range(k):
    c = a * i
    g = th.autograd.grad(c, a)  # works fine
    print(g)

### this is what I wanted to do
for i in range(k):
    b[i] = a * i
    g = th.autograd.grad(b[i], a)  # FAILS when i=1 -- won't a new graph be created for i=1?
    print(g)
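For what it's worth, the failing loop does run on my end if I pass retain_graph=True to the grad call. This is a minimal sketch based on my guess that the slices of b share a single graph whose buffers are freed by the first grad call:

import torch as th

k = 2
a = th.tensor(2.0, dtype=th.float32, requires_grad=True)
b = th.zeros(k, dtype=th.float32)

for i in range(k):
    b[i] = a * i
    # retain_graph=True keeps the buffers of the shared graph alive,
    # so the next iteration can backpropagate through b again
    g = th.autograd.grad(b[i], a, retain_graph=True)
    print(g)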
So my original code doesn't work. I expected each loop iteration to create a new graph for b[i], which would make it valid to call grad on it, but that turns out to be false. Why?
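And here is the workaround I am using for now, a sketch that assumes the goal is simply one independent graph per function: compute each result as a fresh intermediate tensor, take the gradient from that, and store only the detached value in b (the names fi and grads are mine, not from any API):

import torch as th

k = 2
a = th.tensor(2.0, dtype=th.float32, requires_grad=True)
b = th.zeros(k, dtype=th.float32)      # plain storage for the values only
grads = th.zeros(k, dtype=th.float32)  # storage for the gradients

for i in range(k):
    fi = a * i                    # fresh graph rooted at a, independent each iteration
    g, = th.autograd.grad(fi, a)  # consumes only this iteration's graph
    b[i] = fi.detach()            # store the value without linking b into the graph
    grads[i] = g
    print(g)

This runs, but I would still like to understand why the original version fails.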