Problem of backpropagation in RNN

Hi,
I am running into the error “Trying to backward through the graph a second time.”
I thought the following two snippets were equivalent, but apparently they are not. What is the difference?

Code 1

import torch
x = torch.tensor(0.1)
w = torch.tensor(0.1, requires_grad=True)
y = x * w
z = y * w
y.backward()
z.backward()  # raises: Trying to backward through the graph a second time
# c = y + z
# c.backward()

Code 2

import torch
x = torch.tensor(0.1)
w = torch.tensor(0.1, requires_grad=True)
y = x * w
z = y * w
# y.backward()
# z.backward()
c = y + z
c.backward()  # works: only a single backward pass through the graph

Why does the first version raise “Trying to backward through the graph a second time.”?

Thanks.

The backward call frees the intermediate tensors that were needed for the gradient computation, in order to save memory. In code 1, y and z share part of the computation graph (z is computed from y), so after y.backward() the buffers that z.backward() still needs have already been freed. If you need to call backward multiple times through the same graph, you would need to set retain_graph=True in the backward call:

x = torch.tensor(0.1)
w = torch.tensor(0.1, requires_grad=True)
y = x * w
z = y * w
y.backward(retain_graph=True)   # graph is kept alive
z.backward(retain_graph=True)   # so later backward calls can still use it
c = y + z
c.backward()
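
For completeness, here is a small sketch (using the same toy values as above) to check that both styles end up accumulating the same gradient in w.grad:

import torch

x = torch.tensor(0.1)

# Style 1: two backward calls; retain the graph on the first so the second can reuse it
w = torch.tensor(0.1, requires_grad=True)
y = x * w
z = y * w
y.backward(retain_graph=True)
z.backward()          # gradients accumulate in w.grad
print(w.grad)         # tensor(0.1200)  -> dy/dw + dz/dw = x + 2*x*w

# Style 2: sum first, single backward call, no retain_graph needed
w2 = torch.tensor(0.1, requires_grad=True)
y2 = x * w2
z2 = y2 * w2
(y2 + z2).backward()
print(w2.grad)        # tensor(0.1200)

If you only need the gradients once, summing the outputs and calling backward a single time (as in code 2) avoids keeping the graph around and is the cheaper option.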