Can PyTorch print out a list of the parameters in a computational graph if the parameters are not in a module? For example, print the list of parameters leading up to d in the following computational graph:
import torch
from torch.autograd import Variable

a = Variable(torch.rand(1, 4), requires_grad=True)
b = a**2
c = b*2
d = c.mean()
No such function exists at the moment.
I guess you could traverse the graph using .next_functions, finding all the AccumulateGrad functions and getting their .variable attribute. This would give you all the tensors in which gradients will be accumulated (possibly zero-valued) if you call backward on d.
Why do you need such a function? Don’t you already know which tensors are used in your computations?