I’d like to do something conceptually simple, but I’m not sure how to implement it in practice:
f(x).backward()
grad_x = x.grad
#stuff
y = x + u(g)
f(y).backward()
grad_g = g.grad
#stuff
I’d like to be able to do this on the fly, i.e., without maintaining two separate graphs for f(x) and f(y), as these will be user-specific. Ideally I’d like to do an in-place operation on x, y = x + u(g), and then re-pass this to f(y). Is this possible in PyTorch?
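This does work on the fly, since autograd records a fresh graph during each forward computation. Below is a minimal sketch; f and u here are hypothetical stand-ins for your own functions. One caveat: a true in-place update of a leaf tensor that requires grad (e.g. x += u(g)) raises a RuntimeError, so rebinding y = x + u(g) is the idiomatic way.

```python
import torch

# Hypothetical f and u, just for illustration — substitute your own.
def f(t):
    return (t ** 2).sum()

def u(t):
    return 2 * t

x = torch.randn(3, requires_grad=True)
g = torch.randn(3, requires_grad=True)

# First pass: gradient of f with respect to x.
f(x).backward()
grad_x = x.grad.clone()  # clone, since x.grad will accumulate on the next pass

# Second pass: autograd records the graph for y = x + u(g) as it is built,
# so no separate static graph for f(y) is needed.
y = x + u(g)
f(y).backward()
grad_g = g.grad.clone()
```

Note that gradients accumulate into .grad across backward calls, so zero them (or clone, as above) between passes if you need each gradient separately.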
Parameter objects are considered graph leaves, so backward in this case won’t propagate further to model.gamma, because the place where it’s used is “below” a leaf node (i.e. model.alpha).
So what I changed was calling params['alpha'].backward() directly; that way it seems able to reach model.gamma.
I also changed how you compute alpha: your original code, model.alpha = torch.nn.Parameter(torch.exp(model.gamma), requires_grad=True), only created a new Parameter object (itself a fresh leaf) and therefore didn’t connect alpha to the computation graph through gamma.
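To make the fix concrete, here is a minimal sketch (the model is hypothetical, not your actual code): alpha is computed from gamma inside forward instead of being wrapped in a new Parameter, so the exp operation stays in the graph and backward reaches gamma.

```python
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # gamma is the actual learnable leaf parameter.
        self.gamma = torch.nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        # Derive alpha from gamma here, rather than wrapping it in a new
        # Parameter — wrapping would create a fresh leaf detached from gamma.
        alpha = torch.exp(self.gamma)
        return alpha * x

model = Model()
out = model(torch.tensor(2.0))
out.backward()
# model.gamma.grad is now d(exp(gamma) * x)/d(gamma) = exp(gamma) * x
```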
output2 is a scalar variable (e.g. a loss function), therefore you can call .backward() on it.
alpha/beta/gamma/whatever could be a vector- or matrix-valued variable, which means you can’t call .backward() on it directly; you’d have to either loop over every element (not feasible) or pass an explicit gradient argument to .backward().