Consider a simple GAN model as shown here: https://github.com/eriklindernoren/PyTorch-GAN/blob/master/implementations/gan/gan.py

L144 and L157 share a common computation: `discriminator(gen_imgs)`. They differ only in that one detaches the generator output from the computational graph. Either way, the forward pass of the discriminator is computed twice.
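For reference, the two lines in question look roughly like this in the linked script (paraphrased excerpt, only the `gen_imgs` parts shown):

```python
# Generator step (~L144): gradients flow through D back into G
g_loss = adversarial_loss(discriminator(gen_imgs), valid)

# Discriminator step (~L157): gen_imgs is detached, so no gradient
# reaches G, but the discriminator forward pass runs a second time
fake_loss = adversarial_loss(discriminator(gen_imgs.detach()), fake)
```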
Is it possible to compute `discriminator(gen_imgs)` only once, but detach `gen_imgs` from the graph after the computation?
E.g. something like:

```python
d_out = discriminator(gen_imgs)
g_loss = adversarial_loss(d_out, valid)
# (...)
fake_loss = adversarial_loss(d_out.detach(gen_imgs), fake)
```

(Note that `Tensor.detach()` takes no arguments; `d_out.detach(gen_imgs)` is the hypothetical operation being asked about.)
albanD (Alban D) · January 31, 2020, 7:34pm · #2
Why not just use `d_out.detach()` for the second one? It is a Tensor that has no gradient history.
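A minimal sketch of that suggestion (the `discriminator`/`adversarial_loss` stand-ins are hypothetical, just to make the snippet self-contained):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins so the snippet runs on its own
discriminator = nn.Sequential(nn.Linear(4, 1), nn.Sigmoid())
adversarial_loss = nn.BCELoss()
gen_imgs = torch.randn(8, 4, requires_grad=True)  # pretend generator output
valid = torch.ones(8, 1)
fake = torch.zeros(8, 1)

d_out = discriminator(gen_imgs)           # single forward pass
g_loss = adversarial_loss(d_out, valid)   # graph: loss -> D -> G

# detach() returns a tensor that shares storage with d_out
# but carries no gradient history at all
fake_loss = adversarial_loss(d_out.detach(), fake)
print(fake_loss.requires_grad)  # False: no path to D's or G's parameters
```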
If I call `d_out.detach()`, then the gradient wouldn't flow back to the discriminator, right?
albanD (Alban D) · February 2, 2020, 9:53pm · #4
You're right, it won't flow back to the discriminator.
So there isn’t a way to achieve what I asked?
albanD (Alban D) · February 3, 2020, 4:08pm · #6
Isn't that what you asked? You want it to flow back to the discriminator but not above it?
Yes, I want it to flow back to the discriminator. What you proposed will prevent the gradient from flowing back to the discriminator.
albanD (Alban D) · February 4, 2020, 3:48pm · #8
Ah, ok. So no, this is not possible. You will need to forward through the discriminator again.
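For completeness, a sketch of the pattern this implies, detaching the input before the second forward pass so gradients reach D but stop before G (same hypothetical stand-ins as above):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins, as in the earlier snippet
discriminator = nn.Sequential(nn.Linear(4, 1), nn.Sigmoid())
adversarial_loss = nn.BCELoss()
gen_imgs = torch.randn(8, 4, requires_grad=True)  # pretend generator output
fake = torch.zeros(8, 1)

# Second forward pass, with the generator cut out of the graph
fake_loss = adversarial_loss(discriminator(gen_imgs.detach()), fake)
fake_loss.backward()

print(discriminator[0].weight.grad is not None)  # True: D received gradients
print(gen_imgs.grad)                             # None: nothing flowed to G
```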
I see. That’s a pity. Thank you for your responses!