DCGAN outputs ordered garbage

I have a DCGAN training on a dataset of living room images of size 256 x 256 x 3. But for some reason, even after 10 epochs, after producing some uniform grayscale at the beginning, its output looks like this:
https://imgur.com/a/ObzqTrc

Losses (sampled each batch) look like this:

Code is available at:
https://github.com/Grubzerusernameisavailable/DCGAN-pytorch

My guess is that something is wrong in my training code (I transferred from TF 2.0 recently), and I'm not completely sure it is correct.

I guess the view operations will interleave the image pixels if you try to swap the dimensions with them, since view only reinterprets the underlying memory and doesn't actually move any data.
Check which shape each loaded sample has in your ImagesDataset.
If the sample has the shape [height, width, channels], you should call sample = sample.permute(2, 0, 1) instead of sample = sample.view(3, 256, 256).
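
For reference, a minimal sketch of what that could look like in the dataset's __getitem__ (the loading details here are assumptions, not the actual code from your repo):

from torch.utils.data import Dataset

class ImagesDataset(Dataset):
    # Minimal sketch: internals are assumed, only the permute call is the point.
    def __init__(self, images):
        self.images = images  # e.g. a list of [256, 256, 3] image tensors

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        sample = self.images[idx]         # loaded as [height, width, channels]
        sample = sample.permute(2, 0, 1)  # -> [channels, height, width] without interleaving pixels
        return sample.float()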
The same applies for the output of your model:

test_image = gen(noise).cpu().detach()[0].view(256,256,3)

Which is most likely wrong, as PyTorch uses the layout [batch_size, channels, height, width].
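
Applied to that line, a possible fix would be to index out the sample and permute it back to [height, width, channels] for plotting (assuming the generator returns a [batch_size, 3, 256, 256] tensor):

test_image = gen(noise).cpu().detach()[0]         # [3, 256, 256]
test_image = test_image.permute(1, 2, 0).numpy()  # -> [256, 256, 3], e.g. for plt.imshow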
