I have a very simple piece of code that puzzles me (using Python 3.5.3 and PyTorch 0.2.0_3, no CUDA).
As far as I understand, in order to run backward() on a variable
again (after already running it once), it is necessary to reset the
leaf gradients to zero first. But even when I do this, PyTorch still complains with the following error when I run the example code below:
“RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.”
import torch
from torch.autograd import Variable as V

x = V(torch.ones(2,2), requires_grad=True)
y = 3*x*x                      # but with y=3*x it would work!!!
y.backward(torch.ones(2,2))    # first backward pass succeeds
x.grad.data.zero_()            # reset the leaf gradients to zero
y.backward(torch.ones(2,2))    # RuntimeError on this second pass
This happens when I calculate y = 3*x*x, but it does NOT happen when I calculate y = 3*x.
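For comparison, the same sequence with the linear expression goes through both backward passes without any complaint:

import torch
from torch.autograd import Variable as V

x = V(torch.ones(2,2), requires_grad=True)
y = 3*x                        # linear in x
y.backward(torch.ones(2,2))    # first backward pass
x.grad.data.zero_()            # reset the leaf gradients to zero
y.backward(torch.ones(2,2))    # second backward pass runs fine here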
How can I reset my gradients so that I can run backward a second time in my case? Is there a different, better way to make this work?