# Specify retain_graph=True when calling backward the first time

Hi all, my code doesn't work unless I specify `retain_graph=True`, even though I am calling `backward()` only once per iteration. Thanks in advance.

```python
depth_net_encoder.eval()
depth_net_decoder.eval()
while True:
    dummy = torch.zeros(tgt_img_var.shape).to(device)
    for u in range(dummy.shape[0]):
        ex1 = ex2 = 0
        dummy[u, :, ry_list[u]-sz2+ex1:ry_list[u]+sz2+ex1,
                    rx_list[u]-sz2+ex2:rx_list[u]+sz2+ex2] = F.sigmoid(noise)
    depth_var = depth_net_decoder(enc_var)
    loss_data = 0.0
    for i in range(1):
        o_g = o_g_x + o_g_y
        g = g_x + g_y
        if i == 0:
            p_O, p_f = o_g, g

    loss = loss_data * 100  # + loss_reg
    loss.backward()
    noise = torch.tensor(noiset.cpu().detach().numpy())
    loss_scalar = loss.item()
    print(loss_scalar)
```

Hi,

What is the error you’re seeing?

Also, you should no longer use `.data` or `Variable`; for example, you can replace `noise=Variable(torch.tensor(img), requires_grad=True)` with `noise=torch.tensor(img, requires_grad=True)`.
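To illustrate the replacement above, here is a minimal sketch (the `img` data is a made-up stand-in) showing that a plain tensor with `requires_grad=True` participates in autograd just like the old `Variable` did:

```python
import torch

# Old style (deprecated since PyTorch 0.4):
#   noise = Variable(torch.tensor(img), requires_grad=True)
# New style: tensors carry requires_grad directly.
img = [[0.1, 0.2], [0.3, 0.4]]  # stand-in for the real image data
noise = torch.tensor(img, requires_grad=True)

loss = noise.sum()
loss.backward()
print(noise.grad)  # gradient of sum() w.r.t. noise: all ones
```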

My issue is that even though I am calling `backward()` only once per iteration, it asks me to set `retain_graph=True`. What am I doing wrong?

This error usually means the graph is being reused: either you perform some differentiable computation outside the loop, or you re-use results from one iteration in the next.
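As a minimal sketch of the first case (all names here are made up, not from the code above): a differentiable result created before the loop is shared by every iteration's graph, so the second `backward()` tries to traverse buffers that the first one already freed.

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * x  # differentiable computation OUTSIDE the loop: its graph is shared

errored = False
for step in range(2):
    loss = (y + step).sum()  # each iteration's graph hangs off the same y node
    try:
        loss.backward()      # first call frees y's buffers; second call fails
    except RuntimeError:
        errored = True       # "Trying to backward through the graph a second time"

print(errored)  # True: the second backward hit the already-freed shared graph
# Fix: recompute y = x * x inside the loop so each iteration builds a fresh graph
```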

I would try to get a repro that is as small as possible; that helps identify the potentially faulty Tensors mentioned above.
If that doesn't help, you can use something like torchviz to plot the graph and see where the second iteration's graph is attached to the first's.
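If you'd rather avoid an extra dependency, you can also walk the autograd graph by hand via `grad_fn.next_functions` (this is essentially what torchviz draws). A small sketch, with made-up tensors:

```python
import torch

x = torch.ones(2, requires_grad=True)
y = x * 2             # created once, "outside the loop"
loss = (y + 1).sum()

def walk(fn, depth=0):
    """Recursively print the autograd graph rooted at grad_fn."""
    print("  " * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        if next_fn is not None:
            walk(next_fn, depth + 1)

walk(loss.grad_fn)
# Typically prints a chain like SumBackward0 -> AddBackward0 -> MulBackward0
# -> AccumulateGrad; a second iteration's graph would chain onto the same
# MulBackward0 node, which is the reuse you are looking for.
```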