I have a DCGAN training on a dataset of living room images of size 256x256x3. But for some reason, even after 10 epochs, after producing some uniform grayscale output at the beginning, its output looks like this:
Losses (sampled each batch) look like this:
Code is available at:
My guess is that something is wrong in my training code (since I transferred from TF 2.0 recently) and I'm not completely sure that it is correct.
I guess the view operations will interleave the image pixels if you try to swap the dimensions with them.
Check which shape each loaded sample has in your Dataset.
If the sample has the shape
[height, width, channels], you should call
sample = sample.permute(2, 0, 1) instead of
sample = sample.view(3, 256, 256).
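A minimal sketch of the difference on a tiny made-up 2x2 image (the tensor values here are just for illustration): view reinterprets the flat memory and scrambles pixel values across channels, while permute actually moves the channel axis to the front.

```python
import torch

# Hypothetical sample in [height, width, channels] layout, as loaded by PIL/numpy
sample = torch.arange(2 * 2 * 3).reshape(2, 2, 3)

# view only reinterprets the flat memory, so pixel and channel values get mixed
wrong = sample.view(3, 2, 2)

# permute moves the channel axis to the front: [channels, height, width]
right = sample.permute(2, 0, 1)

print(wrong[0])  # mixes values from several pixels and channels
print(right[0])  # channel 0 of every pixel: tensor([[0, 3], [6, 9]])
```

The two results share the same shape but contain completely different "images", which is why the generator can appear to train while producing interleaved noise.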
The same applies to the output of your model:
test_image = gen(noise).cpu().detach().view(256, 256, 3)
which is most likely wrong, as PyTorch uses the layout
[batch_size, channels, height, width].
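A short sketch of the fix for visualization, assuming the generator returns a single image in PyTorch's default [batch_size, channels, height, width] layout (the fake tensor below stands in for gen(noise)):

```python
import torch

# Stand-in for gen(noise): one 3-channel 256x256 image in NCHW layout
fake = torch.randn(1, 3, 256, 256)

# Drop the batch dim, then move channels last for plotting (e.g. with matplotlib)
test_image = fake.squeeze(0).permute(1, 2, 0).cpu().detach()

print(test_image.shape)  # torch.Size([256, 256, 3])
```

As with the input pipeline, using view here would interleave the generated pixels instead of reordering the axes.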