It’s unclear to me under what circumstances .grad will have a value; it seems like it is ‘optimized out’ sometimes. How can I either request that it not be ‘optimized out’, and/or obtain the value via some other means?
In [36]: a = autograd.Variable(torch.rand(2), requires_grad=True)
In [37]: a.grad
In [38]: b = a * 2
In [39]: c = b * 2
In [40]: c.backward(torch.ones(2))
In [41]: c.grad
In [42]: b.grad
In [43]: a.grad
Out[43]:
Variable containing:
4
4
[torch.FloatTensor of size 2]
In [44]: b.requires_grad
Out[44]: True
In [45]: c.requires_grad
Out[45]: True
So the rules are:
- Only user-created Variables will have a .grad field (so b and c in your example will never have one).
- When the Variable is created, its .grad is None.
- When you call .backward() on a graph that contains this Variable, its .grad field will become a Variable containing its gradients, and it never goes back to None.
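To make that concrete, here is a minimal sketch along the lines of your session (same old Variable API; register_hook is one way to observe an intermediate’s gradient, since its .grad is never populated, and the values in the comments are just what this toy graph produces):

import torch
from torch import autograd

a = autograd.Variable(torch.rand(2), requires_grad=True)  # leaf, user-created
b = a * 2                                                  # intermediate
c = b * 2                                                  # intermediate

print(a.grad)  # None: nothing has been backpropagated yet

# Intermediates never get a .grad; a hook lets you see their gradient anyway.
grads = {}
b.register_hook(lambda g: grads.update(b=g))

c.backward(torch.ones(2))

print(a.grad)      # Variable of size 2, all 4s (c = 4a, so dc/da = 4)
print(b.grad)      # still None: b is not user-created
print(grads['b'])  # the gradient that flowed through b, all 2s

In newer releases there is also a retain_grad() method you can call on a non-leaf Variable to make its .grad be populated; on older versions the hook approach above is the usual workaround.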
Ok, fair enough. So, basically, one way of thinking about this is that at least one of .creator and .grad is guaranteed to be None? There is no way that both .creator and .grad can be simultaneously non-None?
Exactly.
Also FYI, .creator has been deprecated in favor of .grad_fn.
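For example, a quick check of that invariant (a sketch; this uses .grad_fn as in the newer releases, and the exact class name printed for the multiply op may vary by version):

import torch
from torch import autograd

a = autograd.Variable(torch.rand(2), requires_grad=True)
b = a * 2
b.backward(torch.ones(2))

# Leaf: gradients accumulate here, and nothing in the graph created it.
print(a.grad_fn, a.grad)  # None, and a Variable of 2s

# Intermediate: created by an op, so it has a grad_fn, but .grad stays None.
print(b.grad_fn, b.grad)  # something like <MulBackward ...>, and None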
Awesome, thanks! And good info on .grad_fn. By the way, I kind of hated the .creator name, and I think .grad_fn is … hmmm … moderately better. But grad_fn implies it is the function to calculate the gradient, which I’m not sure is the case? Anyway, off-topic for this thread really…