Can't get gradient from cloned variables

import torch
from torch.autograd import Variable
q = Variable(torch.randn(2), requires_grad=True)
q_prime = q.clone()
x = torch.dot(q, q)
y = torch.dot(q_prime, q_prime)
x.backward()
y.backward()
print(q.grad)
print(q_prime.grad)

Output:

Variable containing:
6.2226
0.3911
[torch.FloatTensor of size 2]
None

Why can’t I get the gradient from the cloned variable?


Because clone is itself an operation (an edge) in the computation graph, q_prime is an intermediate node rather than a leaf, and by default intermediate nodes do not retain their gradient. In your case the gradient flowing through q_prime is eventually accumulated into q.grad. If you want q_prime itself to keep its gradient, call q_prime.retain_grad() before running backward().
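
A minimal sketch of that fix, written against the current tensor API (recent PyTorch versions no longer need Variable); the names follow the original post:

import torch

q = torch.randn(2, requires_grad=True)
q_prime = q.clone()      # q_prime is an intermediate (non-leaf) node
q_prime.retain_grad()    # ask autograd to keep q_prime's gradient

x = torch.dot(q, q)
y = torch.dot(q_prime, q_prime)
x.backward()
y.backward()

print(q.grad)        # 2*q from x plus 2*q through the clone from y, i.e. 4*q
print(q_prime.grad)  # now 2*q_prime instead of None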


Your answer is really helpful!
Thanks!