DCGAN Convergence Issues

Hi, I am implementing this DCGAN tutorial (DCGAN Tutorial — PyTorch Tutorials 2.2.0+cu121 documentation) on two different datasets, but my generator loss is not converging.

I am training the model for 75 epochs, and I compute the average loss for each epoch as:

average loss per epoch = sum of losses for that epoch / number of iterations in that epoch
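For reference, the averaging above can be sketched as a small helper. This is illustrative only: `step_fn` stands in for one D/G training iteration from the tutorial and is assumed to return the two losses as floats.

```python
def train_one_epoch(dataloader, step_fn):
    """Run one epoch and return the average D and G losses.

    step_fn(batch) is a placeholder for one training iteration and is
    assumed to return (lossD, lossG) as plain floats.
    """
    sum_d = sum_g = 0.0
    n_iters = 0
    for batch in dataloader:
        lossD, lossG = step_fn(batch)
        sum_d += lossD
        sum_g += lossG
        n_iters += 1
    # average loss per epoch = sum of losses for the epoch / iterations
    return sum_d / n_iters, sum_g / n_iters
```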

My final graph looks like this:

[plot: discriminator and generator loss over the training epochs]

How can I fix the generator loss behaviour? I tried changing the learning rate to different values, but it didn't work. Thanks for the help!

If anyone is facing the same issue, I was able to solve it using MultiStepLR.
In the training section I step the schedulers (with milestones at epochs 25 and 45) right after the optimizer.step() calls.

the scheduler can be defined as

scheduler_G = MultiStepLR(optimizerG, milestones=[25, 45], gamma=gamma)
scheduler_D = MultiStepLR(optimizerD, milestones=[25, 45], gamma=gamma)

and it is called as

scheduler_D.step()
scheduler_G.step()
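Putting it together, a minimal sketch of the wiring looks like the following. The `Linear` modules stand in for the tutorial's netG/netD, the per-iteration update is elided, and the `lr` and `gamma` values are illustrative, not prescriptive; the point is that the schedulers are stepped once per epoch, after the optimizer steps.

```python
import torch
from torch.optim.lr_scheduler import MultiStepLR

netG = torch.nn.Linear(10, 10)  # stand-ins for the DCGAN generator
netD = torch.nn.Linear(10, 10)  # and discriminator
optimizerG = torch.optim.Adam(netG.parameters(), lr=0.0002, betas=(0.5, 0.999))
optimizerD = torch.optim.Adam(netD.parameters(), lr=0.0002, betas=(0.5, 0.999))

# Multiply the learning rate by gamma at epochs 25 and 45
gamma = 0.1  # illustrative value; tune for your datasets
scheduler_G = MultiStepLR(optimizerG, milestones=[25, 45], gamma=gamma)
scheduler_D = MultiStepLR(optimizerD, milestones=[25, 45], gamma=gamma)

num_epochs = 75
for epoch in range(num_epochs):
    # ... per-iteration D and G updates here, each ending with
    # optimizerD.step() / optimizerG.step() as in the tutorial ...

    # Step the schedulers once per epoch, after the optimizer steps
    scheduler_D.step()
    scheduler_G.step()
```

After both milestones have passed, each optimizer's learning rate is 0.0002 × 0.1 × 0.1 = 2e-06.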