So I made a post last week about how I was unsuccessful at implementing a model that was originally written in TensorFlow.
I decided to copy a working model from GitHub and see if it would work for me.
Due to memory limitations on my GPU, I changed batch_size from 128 to 32. I’m not sure whether this has a significant impact.
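(Side note on the batch size change: a smaller batch mainly gives noisier gradient estimates, and one common heuristic is to scale the learning rate in proportion to the batch size when you change it. This is a minimal sketch of that heuristic; the base values below are generic DCGAN-style assumptions, not taken from the GitHub model.)

```python
# Hypothetical linear-scaling heuristic: scale lr with batch size.
# Base values are assumptions, not from the referenced GitHub code.
base_batch_size = 128
base_lr = 2e-4  # a common Adam default in DCGAN-style training

def scaled_lr(batch_size, base_bs=base_batch_size, base=base_lr):
    """Linearly scale the learning rate with the batch size."""
    return base * batch_size / base_bs

print(scaled_lr(32))  # batch 128 -> 32 means lr 2e-4 -> 5e-5
```

Whether this helps here is an open question, but it is one knob worth checking when a copied model is run with a different batch size than it was tuned for.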
Here is what the losses looked like:
Generated images (fixed seed) before training:
Generated images (fixed seed) at epoch 1:
Generated images (fixed seed) at epoch 9:
Generated images (fixed seed) at epoch 10:
Generated images (fixed seed) at epoch 20:
Is there any reason why it is behaving like this? Why does it do so well for 10 epochs and then suddenly start generating garbage?
Thank you for your help.