Equivalent data preprocessing methods get entirely different results

Hi. I changed the transforms in the DCGAN tutorial (https://github.com/pytorch/tutorials/blob/master/beginner_source/dcgan_faces_tutorial.py) from

dataset = dset.ImageFolder(root=dataroot,
                           transform=transforms.Compose([
                               transforms.Resize(image_size),
                               transforms.CenterCrop(image_size),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ]))

to

dataset = dset.ImageFolder(root=dataroot,
                           transform=transforms.Compose([
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ]))

i.e., by removing the Resize and CenterCrop transforms.

The results become entirely different: without Resize and CenterCrop, the DCGAN just cannot produce good results no matter which hyperparameters I try, and the discriminator's loss quickly drops to 0.

But to my knowledge the CelebA images are 64 by 64, so removing Resize(64) and CenterCrop(64) shouldn't make any difference. Could anyone explain this? I'd really appreciate it.

Did you make sure all image tensors have the same width and height after removing the two mentioned transformations?
Also, is this effect reproducible, i.e. how many times did you run the code?

I thought CelebA's images were 64 by 64, but they aren't. That was the reason. Thanks!