Is my PyTorch broken?

So I made a post last week about how I was unsuccessful at implementing a model that was originally written in TensorFlow.

I decided to copy and paste a working model from GitHub to see if it would work.

I used this model: GitHub - znxlwm/pytorch-MNIST-CelebA-GAN-DCGAN (PyTorch implementation of Generative Adversarial Networks (GAN) and Deep Convolutional Generative Adversarial Networks (DCGAN) for the MNIST and CelebA datasets).

Due to memory limitations on my GPU, I changed batch_size from 128 to 32. I'm not sure whether this has a significant impact.
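
In case it matters, here is a rough sketch of what I could do instead to keep the effective batch size at 128 via gradient accumulation. The names (G, D, D_optimizer, train_loader) are placeholders for the objects in the repo's training script, not its actual code, and latent_dim=100 is an assumption I haven't verified:

```python
import torch
import torch.nn as nn

def train_discriminator_accumulated(G, D, D_optimizer, train_loader,
                                    accum_steps=4, latent_dim=100, device="cuda"):
    """Sketch: accumulate gradients over 4 micro-batches of 32 so the optimizer
    step averages over ~128 samples, without the memory cost of batch_size=128.
    G, D, D_optimizer, train_loader are assumed to come from the repo's script."""
    criterion = nn.BCELoss()
    D_optimizer.zero_grad()
    for step, (x, _) in enumerate(train_loader):          # loader built with batch_size=32
        x = x.to(device)
        bs = x.size(0)
        real_labels = torch.ones(bs, device=device)
        fake_labels = torch.zeros(bs, device=device)

        z = torch.randn(bs, latent_dim, 1, 1, device=device)  # latent shape assumed
        fake = G(z)

        # Standard DCGAN discriminator loss on real + detached fake samples
        d_loss = criterion(D(x).view(-1), real_labels) + \
                 criterion(D(fake.detach()).view(-1), fake_labels)
        (d_loss / accum_steps).backward()                 # scale so gradients average over ~128 samples

        if (step + 1) % accum_steps == 0:                 # step only every 4 micro-batches
            D_optimizer.step()
            D_optimizer.zero_grad()
    # The generator update would be accumulated the same way.
```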

Here is what the losses looked like:

[image]

Generated images (fixed seed) before training:
[image]

Generated images (fixed seed) at epoch 1:
[image]

Generated images (fixed seed) at epoch 9:
[image]

Generated images (fixed seed) at epoch 10:
[image]

Generated images (fixed seed) at epoch 20:
[image]

Is there any reason why it is behaving like this? Why does it do so well for 10 epochs and then suddenly start generating garbage?

Thank you for your help.