I am using an LSGAN for super-resolution. The generator loss is the adversarial (GAN) loss plus an L1 loss. After around 4 epochs, the generator and discriminator seem to reach equilibrium, and the visual quality improves steadily. But after about 10 epochs, the loss starts to oscillate, and the visual quality of the outputs degrades, even on the training samples. What are the possible reasons for this? I did not expect longer training to have side effects, since GANs are supposedly unlikely to overfit (https://www.quora.com/How-can-we-apply-early-stopping-for-Generative-Adversarial-Network-GAN-to-prevent-overfitting).
The plot below shows the training loss. Thanks for any help!
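For context, my objective is roughly the following. This is a minimal sketch, not my exact code: PyTorch is assumed, and `lambda_l1` and the tensor names are placeholders.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()   # LSGAN uses a least-squares adversarial loss
l1 = nn.L1Loss()     # pixel-wise reconstruction term
lambda_l1 = 100.0    # hypothetical weight on the L1 term

def generator_loss(d_fake, fake_hr, real_hr):
    # LSGAN generator term: push D(fake) toward the "real" label (1),
    # plus a weighted L1 loss against the ground-truth HR image.
    adv = mse(d_fake, torch.ones_like(d_fake))
    return adv + lambda_l1 * l1(fake_hr, real_hr)

def discriminator_loss(d_real, d_fake):
    # LSGAN discriminator term: real outputs -> 1, fake outputs -> 0.
    return 0.5 * (mse(d_real, torch.ones_like(d_real))
                  + mse(d_fake, torch.zeros_like(d_fake)))
```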