Specifying retain_graph=True not working for "RuntimeError: Trying to backward through the graph a second time"

Hello, I am trying to train a CycleGAN model in PyTorch. Below is the code snippet that trains the two generators alternately:

    # Training G_ST and G_TS
    if epoch % 5 == 0:
        optimizer_G_ST.zero_grad()
        L_GAN_ST_forGST = -torch.mean(torch.log(D_T_G_ST_E_x_s_forGST))
        L_cyc_forGST = cycleLoss(G_ST_G_TS_E_x_t, E_x_t)
        L_GAN_ST = L_GAN_ST_forGST + L_cyc_forGST
        L_GAN_ST.backward(retain_graph=True)
        optimizer_G_ST.step()

    if epoch % 5 == 1:
        optimizer_G_TS.zero_grad()
        L_GAN_TS_forGTS = -torch.mean(torch.log(D_S_G_TS_E_x_t_forGTS))
        L_cyc_forGTS = cycleLoss(G_TS_G_ST_E_x_s, E_x_s)
        L_GAN_TS = L_GAN_TS_forGTS + L_cyc_forGTS
        L_GAN_TS.backward()
        optimizer_G_TS.step()

For the line of code L_GAN_ST.backward(retain_graph=True), I am getting the following error:
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
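
For reference, the error itself can be reproduced with a much smaller graph (a hypothetical sketch, not my actual model): the saved intermediate values are freed by the first backward() call, so a second backward() through the same graph fails.

    import torch

    x = torch.randn(4, 3)
    w = torch.randn(3, 1, requires_grad=True)
    out = (x @ w).sum()  # the forward pass builds the autograd graph once

    out.backward()  # frees the saved intermediate values of this graph
    out.backward()  # raises the same RuntimeError on the second call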

What is the reason for this error, and how do I resolve it? A few answers have suggested using retain_graph=True, which I tried, but it is not working. Please help.

Which posts suggested this as a solution? It's generally the wrong approach.

You might be running into this issue.

I was talking about this post.
Also, the discussion in this issue didn't help. I am still facing the same error.

Did you read my linked post, and did you check whether your training is also trying to use stale gradients?
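
To make that concrete: retain_graph=True keeps a graph that was built before the last optimizer.step(), so backpropagating through it again computes gradients from stale activations of the already-updated parameters. A minimal sketch of the usual fix (hypothetical names, not your CycleGAN) is to run the forward pass inside each training branch so that every backward() call sees a freshly built graph:

    import torch

    model = torch.nn.Linear(3, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.randn(4, 3)

    for epoch in range(10):
        optimizer.zero_grad()
        out = model(x)            # fresh forward pass -> fresh graph each iteration
        loss = out.pow(2).mean()
        loss.backward()           # no retain_graph needed; the graph is rebuilt next time
        optimizer.step()

In your code that would mean computing D_T_G_ST_E_x_s_forGST, G_ST_G_TS_E_x_t, etc. inside the corresponding if branch instead of reusing outputs from an earlier forward pass.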