I was trying to understand the autograd mechanism in more depth. To test my understanding, I wrote the following code, which I expected to raise an error ("Trying to backward through the graph a second time"):
import torch

a = torch.tensor([0.5], requires_grad=True)  # a was missing from my snippet; it must require grad
b = torch.tensor([0.5])
for i in range(5):
    b.data.zero_().add_(0.5)
    b = b + a
    c = b * a
    c.backward()
I expected it to report an error when c.backward() is called the second time through the loop, because the graph history of b should already have been freed by the first backward. However, nothing happens.
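For context, this is the error I was expecting. It can be triggered directly by calling backward twice on the same graph (a minimal sketch, separate from the loop above):

```python
import torch

x = torch.tensor([0.5], requires_grad=True)
y = x * x              # mul records its inputs in the graph for backward
y.backward()           # first backward frees the saved tensors
try:
    y.backward()       # second backward through the freed graph
    second_err = None
except RuntimeError as e:
    second_err = e     # "Trying to backward through the graph a second time ..."
```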
But when I changed b + a to b * a as follows,
import torch

a = torch.tensor([0.5], requires_grad=True)  # again, a must require grad
b = torch.tensor([0.5])
for i in range(5):
    b.data.zero_().add_(0.5)
    b = b * a
    c = b * a
    c.backward()
It did report the error I was expecting.
This looks pretty weird to me. I don't understand why no error is raised in the former case, or why changing + to * makes a difference.
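To make the comparison easy to run, here is a minimal self-contained repro of both cases in one script (a sketch: the in-place .data reset is omitted, since it only changes values and not the graph structure, and a is defined explicitly with requires_grad=True):

```python
import torch

def run(op):
    # Rebuild b each iteration with the given op, then backward through c.
    a = torch.tensor([0.5], requires_grad=True)
    b = torch.tensor([0.5])
    try:
        for i in range(5):
            b = op(b, a)
            c = b * a
            c.backward()
        return None
    except RuntimeError as e:
        return e

add_err = run(lambda b, a: b + a)  # loop completes without error
mul_err = run(lambda b, a: b * a)  # fails on the second backward
print(add_err)
print(mul_err)
```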