About updating the generator when training a GAN

I am a beginner with PyTorch and I want to train a GAN. I have looked at many different GAN implementations and found that when updating the discriminator, the detach function is used to avoid updating the generator. But why is detach not used when updating the generator? Example: https://github.com/eriklindernoren/PyTorch-GAN/blob/master/implementations/context_encoder/context_encoder.py

When you update the discriminator with a fake sample, you are training the discriminator to classify this fake sample as fake. The calculated loss should not create gradients for the generator, since these gradients would update the generator in such a way that its next fake samples would be easier to detect as fake.

.detach() detaches the fake output from the computation graph and stops Autograd from backpropagating gradients past this point, so the generator's parameters receive no gradients from the discriminator loss.
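Here is a minimal sketch of the discriminator step. The model definitions, shapes, and learning rate are just illustrative placeholders, not taken from the linked repository:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two models; sizes are arbitrary.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
discriminator = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

adversarial_loss = nn.BCELoss()
optimizer_D = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_samples = torch.randn(32, 4)   # pretend these come from the dataset
z = torch.randn(32, 8)              # latent noise
valid = torch.ones(32, 1)           # target label "real"
fake = torch.zeros(32, 1)           # target label "fake"

optimizer_D.zero_grad()
real_loss = adversarial_loss(discriminator(real_samples), valid)

# .detach() cuts the graph here, so the backward pass stops at the
# fake samples and never reaches the generator's parameters.
fake_samples = generator(z)
fake_loss = adversarial_loss(discriminator(fake_samples.detach()), fake)

d_loss = (real_loss + fake_loss) / 2
d_loss.backward()    # gradients only for the discriminator
optimizer_D.step()
```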

On the other hand, when you are training the generator, you pass the generator's fake output to the discriminator together with the “real” targets in order to calculate the gradients for the generator.
The update step will thus change the generator in such a way that its next fake output is more likely to be classified as “real” by the discriminator.
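And the corresponding generator step, continuing the sketch above (again, the names are illustrative):

```python
optimizer_G = torch.optim.Adam(generator.parameters(), lr=2e-4)

optimizer_G.zero_grad()
gen_samples = generator(z)    # no detach: keep the graph intact

# Use the "real" targets so the gradients push the generator toward
# outputs the discriminator classifies as real.
g_loss = adversarial_loss(discriminator(gen_samples), valid)
g_loss.backward()
optimizer_G.step()    # updates only the generator's parameters
```

Note that `g_loss.backward()` still populates `.grad` for the discriminator's parameters, but `optimizer_G.step()` only updates the generator; the stale discriminator gradients are discarded by `optimizer_D.zero_grad()` at the start of the next discriminator update.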