Detach variable from computed dynamic graph

Consider a simple GAN model, as shown here: https://github.com/eriklindernoren/PyTorch-GAN/blob/master/implementations/gan/gan.py

Lines L144 and L157 share a common computation: discriminator(gen_imgs). They differ only in that one of them detaches the generator’s output from the computational graph; either way, the forward pass of the discriminator is computed twice.
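For reference (paraphrasing the linked script; the exact line numbers may have shifted), the two lines look roughly like this:

# generator update (L144): gradients flow through the discriminator into the generator
g_loss = adversarial_loss(discriminator(gen_imgs), valid)

# discriminator update (L157): gen_imgs is detached, so only the discriminator gets gradients
fake_loss = adversarial_loss(discriminator(gen_imgs.detach()), fake)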

Is it possible to compute discriminator(gen_imgs) only once, but detach gen_imgs from the graph after computation?

E.g. something like:

d_out = discriminator(gen_imgs)
g_loss = adversarial_loss(d_out, valid)
(...)
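# hypothetical API: detach() takes no such argument in PyTorch; the idea is to cut the graph at gen_imgs only, keeping the discriminator part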
fake_loss = adversarial_loss(d_out.detach(gen_imgs), fake)

Why not just use d_out.detach() for the second one? It is a Tensor that has no gradient history.

If I call d_out.detach(), then the gradient won’t flow back to the discriminator, right?

You’re right, it won’t flow back to the discriminator.
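As an illustration (a minimal sketch with a stand-in linear discriminator, not the model from the linked script): detach() drops the entire gradient history, including the part inside the discriminator.

import torch
import torch.nn as nn

discriminator = nn.Linear(10, 1)   # stand-in for the real discriminator
gen_imgs = torch.randn(4, 10)      # stand-in for a generated batch

d_out = discriminator(gen_imgs)
print(d_out.grad_fn)            # e.g. <AddmmBackward0 ...>: history reaching back into the discriminator
print(d_out.detach().grad_fn)   # None: the whole history is gone, discriminator included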

So there isn’t a way to achieve what I asked?

Isn’t that what you asked?
You want it to flow back to the discriminator but not above it?

Yes, I want it to flow back to the discriminator. What you proposed would prevent the gradient from flowing back to the discriminator.

Ah, ok. So no, this is not possible to do. You will need to forward through the discriminator again.
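In other words, the working pattern is the one already in the linked script: reuse gen_imgs, but run the discriminator forward a second time and detach gen_imgs (not d_out) for the discriminator update. A rough sketch (optimizer and variable names follow the linked gan.py):

# Generator step: gradient flows through the discriminator into the generator
optimizer_G.zero_grad()
g_loss = adversarial_loss(discriminator(gen_imgs), valid)
g_loss.backward()
optimizer_G.step()

# Discriminator step: a second forward pass, with gen_imgs detached so the
# generator receives no gradient while the discriminator still does
optimizer_D.zero_grad()
real_loss = adversarial_loss(discriminator(real_imgs), valid)
fake_loss = adversarial_loss(discriminator(gen_imgs.detach()), fake)
d_loss = (real_loss + fake_loss) / 2
d_loss.backward()
optimizer_D.step()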

I see. That’s a pity. Thank you for your responses!