I have a DCGAN training on a dataset of 256 × 256 × 3 living-room images. But for some reason, even after 10 epochs, after some uniform grayscale output at the beginning, its output looks like this: https://imgur.com/a/ObzqTrc
I guess the view operations will interleave the image pixels if you try to swap the dimensions with them, since view only reinterprets the underlying memory without actually moving any data.
Check which shape each loaded sample has in your ImagesDataset.
If the sample has the shape [height, width, channels], you should call sample = sample.permute(2, 0, 1) instead of sample = sample.view(3, 256, 256).
The same applies to the output of your model.
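To illustrate the difference, here is a small sketch (with a hypothetical 2 × 2 image instead of 256 × 256, so the values are easy to follow): view re-chunks the flat memory buffer and mixes channel values together, while permute moves the channel axis to the front and keeps each pixel intact.

```python
import torch

# Hypothetical [H, W, C] = [2, 2, 3] sample; flat values 0..11
sample = torch.arange(2 * 2 * 3).reshape(2, 2, 3)

viewed = sample.view(3, 2, 2)       # wrong: just re-chunks the flat buffer
permuted = sample.permute(2, 0, 1)  # right: [C, H, W] with pixels intact

# Channel 0 should hold every third value of the flat buffer (0, 3, 6, 9)
print(permuted[0])  # tensor([[0, 3], [6, 9]])
print(viewed[0])    # tensor([[0, 1], [2, 3]]) -- channels interleaved
```

Note that permute returns a non-contiguous tensor, so call .contiguous() afterwards if a later op requires contiguous memory.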