Backward() twice

import torch
import torch.nn as nn

a = nn.Parameter(torch.tensor([[2.]]))
b = a * 2
c = b.sum()
c.backward()
print(c)
c.backward()  # error

I know this will raise an error:

Trying to backward through the graph a second time

But I have a question: the forward history still exists:

c:
tensor(4., grad_fn=<SumBackward0>)

Why can't I backward twice? Is there something secret in PyTorch that I don't know about? Why doesn't PyTorch just compute the gradient again from the forward history?

The intermediate activation tensors were already freed during the first backward() call, which is what causes the error.
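
As a quick illustration (a minimal sketch using your example), passing retain_graph=True to the first backward() keeps those saved tensors alive, so a second backward() on the same graph works and the gradients accumulate:

import torch
import torch.nn as nn

a = nn.Parameter(torch.tensor([[2.]]))
b = a * 2
c = b.sum()

c.backward(retain_graph=True)  # keep the saved intermediate tensors alive
print(a.grad)  # tensor([[2.]])
c.backward()                   # works now; the gradient accumulates
print(a.grad)  # tensor([[4.]])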

If so, why does PyTorch still retain the grad_fn? After all, we cannot backward anymore.

You can still backward through the parts of the graph which do not need the intermediate activations to calculate the gradient, so removing the .grad_fn would be unnecessarily strict, as seen here:

import torch

x = torch.randn(1, 1, requires_grad=True)
y = x * 2
w = torch.randn(1, 1, requires_grad=True)
z = y + w

z.mean().backward()
print(x.grad, w.grad)
# > tensor([[2.]]) tensor([[1.]])

# z.mean().backward()
# > RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

z.mean().backward(inputs=w)
print(x.grad, w.grad)
# > tensor([[2.]]) tensor([[2.]])
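
The second backward succeeds here because the path from z back to w (an addition followed by a mean) does not need any saved tensors, while the multiplication on x's path does (its grad_fn keeps a reference to the other operand), and that saved tensor was freed by the first backward. Restricting the call with inputs=w therefore never touches the freed part of the graph, and w.grad simply accumulates to tensor([[2.]]).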