Adversarial Training: Why not reuse the generator's output?

I have a question about training GANs: is it possible to avoid generating two separate batches of fake data per training step (one for training the discriminator and one for training the generator)?

In most examples I see the following procedure for training GANs:

  1. Train Discriminator
    • Train on fake data: use the generator's output
    • Train on real data
    • Calculate the loss (e.g. binary cross-entropy)
  2. Train Generator
    • Generate fake data (again, from fresh noise)
    • Calculate the loss using the discriminator

Here is such an example: https://github.com/pytorch/examples/blob/master/dcgan/main.py#L195
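For concreteness, here is a condensed sketch of the two-pass procedure I described above (my own paraphrase rather than a verbatim excerpt of the linked code; `netG`, `netD`, the optimizers, and the toy shapes are placeholders so the snippet runs on its own):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in models/optimizers; in practice these would be
# the DCGAN netG/netD and Adam optimizers from the linked example.
nz = 100                                                # latent dimension (assumed)
netG = nn.Sequential(nn.Linear(nz, 784), nn.Tanh())    # toy generator
netD = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())  # toy discriminator
optimizerG = torch.optim.Adam(netG.parameters(), lr=2e-4)
optimizerD = torch.optim.Adam(netD.parameters(), lr=2e-4)
criterion = nn.BCELoss()

real = torch.randn(64, 784)        # placeholder for a batch of real data
b = real.size(0)
ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

# 1. Train discriminator: real batch + a freshly generated fake batch
netD.zero_grad()
fake = netG(torch.randn(b, nz))
lossD = criterion(netD(real), ones) + criterion(netD(fake.detach()), zeros)
lossD.backward()                   # detach(): no gradients flow into G here
optimizerD.step()

# 2. Train generator: fake data is generated *again* from fresh noise
netG.zero_grad()
fake2 = netG(torch.randn(b, nz))       # second generator forward pass
lossG = criterion(netD(fake2), ones)   # G wants D to output "real"
lossG.backward()
optimizerG.step()
```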

My question is: why not do the following instead?

  1. Generate fake data (once)
  2. Train Discriminator
    • Train on the fake batch from step 1
    • Train on real data
    • Calculate the loss
  3. Train Generator
    • Calculate the loss using the discriminator, reusing the same fake batch
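A sketch of this variant, reusing the setup from the snippet above, would look like this (again just my understanding, not code from the linked example):

```python
# Generate the fake batch only once per training step
netD.zero_grad()
fake = netG(torch.randn(b, nz))    # single generator forward pass
lossD = criterion(netD(real), ones) + criterion(netD(fake.detach()), zeros)
lossD.backward()
optimizerD.step()

# Reuse the same `fake` tensor for the generator update; only the
# discriminator forward pass is repeated (now with updated D weights)
netG.zero_grad()
lossG = criterion(netD(fake), ones)
lossG.backward()                   # backprops through the retained graph into G
optimizerG.step()
```

As far as I can tell this works in PyTorch, since `fake` keeps its autograd graph back to the generator as long as it isn't detached.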

Am I missing something? In the first case, aren't we simply wasting computation by running the generator twice per step? Is there any benefit to doing things that way?