Do I need to put my pre-trained Discriminator in eval()?

Hi, I am trying to train a GAN-based model with a pre-trained discriminator, so I wonder whether I need to put the pre-trained discriminator into evaluation mode, given that the optimizer only updates the generator's parameters. I looked through the GAN training example in PyTorch DCGAN Tutorial — PyTorch Tutorials 1.13.1+cu117 documentation. In that example, they didn't put the discriminator in eval() when updating the generator, so is it the same for a pre-trained one? Thank you

You want the gradients to come from the Discriminator. That's why, for training the Generator (the second Discriminator pass), you'll note they did not call .detach() on the fake images. In this way, you can think of the freshly updated Discriminator and the Generator as behaving like one model: by setting the labels to real, we get the difference between the fake images and what the Discriminator identifies as "real" features. The Discriminator then communicates to the Generator which parts of the images looked fake.
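To make that concrete, here is a minimal, self-contained sketch of that Generator update, assuming tiny stand-in networks and the usual BCE setup (the names netG, netD, nz, and criterion follow the DCGAN tutorial's conventions; the two-layer models are placeholders, not the tutorial's actual architectures):

import torch
import torch.nn as nn

nz, batch_size = 100, 16
netG = nn.Sequential(nn.Linear(nz, 784), nn.Tanh())      # stand-in Generator
netD = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())    # stand-in Discriminator
criterion = nn.BCELoss()
optimizerG = torch.optim.Adam(netG.parameters(), lr=2e-4)

noise = torch.randn(batch_size, nz)
fake = netG(noise)                        # no .detach() on the fake images
label = torch.full((batch_size,), 1.0)    # labels set to "real" for this pass
output = netD(fake).view(-1)
errG = criterion(output, label)           # how far from "real" did D judge the fakes?
netG.zero_grad()
errG.backward()                           # gradients flow back through D into G
optimizerG.step()                         # only G's parameters are updated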


So that means, to get the gradients, I should not put the pre-trained discriminator in eval(), right? Thank you

Sorry, I may have misunderstood your question. eval() disables dropout layers and switches batchnorm layers to their running statistics; it has nothing to do with gradients. The correct way to run inference without gradients is:

with torch.no_grad():      # no autograd graph is built inside this block
    output = model(input)

With that said, calling .eval() on the Discriminator can be a good idea here: it puts its batchnorm layers into evaluation mode (using running statistics instead of per-batch statistics), which is usually what you want for a frozen, pre-trained model, and may make the Generator's training marginally cheaper since batchnorm stops updating its running statistics. But do not wrap the Discriminator pass in torch.no_grad(), because the Generator still needs gradients to flow back through the Discriminator.
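Putting the two together for a frozen, pre-trained Discriminator, a sketch could look like this (reusing the names from the snippet above; freezing D's parameters with requires_grad_(False) is an optional extra, since optimizerG never touches them anyway):

netD.eval()                               # batchnorm uses running stats, dropout is off
for p in netD.parameters():
    p.requires_grad_(False)               # optional: skips storing gradients for D's weights

fake = netG(torch.randn(batch_size, nz))
output = netD(fake).view(-1)              # NOT wrapped in torch.no_grad()
errG = criterion(output, torch.full((batch_size,), 1.0))
netG.zero_grad()
errG.backward()                           # still backpropagates through the frozen D
optimizerG.step()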
