Deeper DCGAN with AE stabilizer - AEGeAN

Hi All,

Just thought I’d share a project that I finished recently. It uses a very deep DCGAN trained in parallel as an autoencoder to generate large images pretty reliably, with some interesting results. It seems to avoid mode collapse, or recover from it, better than a regular GAN.
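Roughly, the trick is to add a reconstruction objective next to the usual adversarial one, so the generator also has to work as the decoder of an autoencoder. A toy sketch of the combined step (stand-in networks and made-up names like netE and lambda_ae — not the repo’s actual code):

import torch
import torch.nn as nn

# Tiny stand-ins for the real (much deeper) networks, just to show the idea.
nz, image_size, batch = 100, 64, 4
netG = nn.Sequential(nn.ConvTranspose2d(nz, 3, image_size), nn.Tanh())  # generator / decoder
netE = nn.Conv2d(3, nz, image_size)                                     # encoder
netD = nn.Sequential(nn.Conv2d(3, 1, image_size), nn.Sigmoid())         # discriminator
optimizerG = torch.optim.Adam(list(netG.parameters()) + list(netE.parameters()))
adv_criterion, recon_criterion = nn.BCELoss(), nn.MSELoss()
lambda_ae = 1.0  # made-up weight between the two terms

real = torch.rand(batch, 3, image_size, image_size)

# (1) Adversarial term: the usual DCGAN generator loss.
fake = netG(torch.randn(batch, nz, 1, 1))
errG_adv = adv_criterion(netD(fake).view(-1), torch.ones(batch))

# (2) Autoencoder term: the generator doubles as a decoder and must
#     reconstruct real images routed through the encoder.
errG_ae = recon_criterion(netG(netE(real)), real)

optimizerG.zero_grad()
(errG_adv + lambda_ae * errG_ae).backward()
optimizerG.step()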

I also have a version that generates at 512x512 and is a lot lighter on memory; I can add it if anyone is interested.

If something is broken, post an issue and I’ll try to fix it.

Enjoy!


Thanks for sharing, Tyler. This is pretty cool. It reminds me of EBGAN, which has an autoencoding GAN formulation.


Hi there,

I’m quite new to PyTorch and machine learning. Still, I tried this net and got nowhere. All it produced were darkish images that became brighter over time (100 epochs), but nothing useful. The image here is a downsampled version of the last fake_samples.png.

My experiments with the original DCGAN are quite satisfying, but I’m looking for higher resolution output :wink:

I had to make some modifications before I could run it without errors. The first thing I had to change was to use the dataset construction from DCGAN:

import torchvision.datasets as dset
import torchvision.transforms as transforms

dataset = dset.ImageFolder(root=dataroot,  # dataroot: path to the image folder
                           transform=transforms.Compose([
                               transforms.Resize(image_size),
                               transforms.CenterCrop(image_size),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                           ]))

Using the original dataset construction resulted in this error:

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1474: UserWarning: Using a target size (torch.Size([4, 3, 1024, 1365])) that is different to the input size (torch.Size([4, 3, 1024, 1024])) is deprecated. Please ensure they have the same size.
  "Please ensure they have the same size.".format(target.size(), input.size()))
Traceback (most recent call last):
  File "/root/code/AEGeAN/AEGeAN_run.py", line 161, in <module>
    reconstruction_loss = criterion(reconstructed, input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 433, in forward
    reduce=self.reduce)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1477, in binary_cross_entropy
    "!= input nelement ({})".format(target.nelement(), input.nelement()))
ValueError: Target and input must have the same number of elements. target nelement (16773120) != input nelement (12582912)
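The element counts in that error match the mismatched sizes from the warning exactly:

# target: a real batch that kept its 1365-pixel width (aspect ratio preserved)
4 * 3 * 1024 * 1365   # = 16773120
# input: the square reconstruction produced by the network
4 * 3 * 1024 * 1024   # = 12582912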

To get rid of some deprecation warnings, I replaced all statements like fake_samples_loss.data[0] in the logging output with fake_samples_loss.item().
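For reference, the change looks like this:

# deprecated since PyTorch 0.4: indexing into .data of a scalar loss
err = fake_samples_loss.data[0]
# 0.4+ replacement: .item() returns the Python number held by a 0-dim tensor
err = fake_samples_loss.item()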

I am running PyTorch 0.4.0 here. My input for the tests consists of 500 images from mapillary.com (2048x1536). Besides the changes mentioned above, I only reduced n_iter to 100, cut down the printing of status lines and the saving of images and models, and switched to .png images (.gif produced the same results, btw).

Maybe you can answer some of my questions:

  • what type of images did you use as input?
  • what mistake(s) am I making? :wink:
  • is this a problem, or a warning that can be ignored for now:
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:1474: UserWarning: Using a target size (torch.Size([4])) that is different to the input size (torch.Size([4, 1])) is deprecated. Please ensure they have the same size.
  "Please ensure they have the same size.".format(target.size(), input.size()))

Thanks for any hints!

Hi! Thanks for asking, and sorry you are having trouble.

Are you resizing the images to 1024x1024? One of the tradeoffs this project makes is that it expects RGB images at exactly that size, so the network architecture is not very flexible.

Try using torchvision.transforms.Resize with a size of (1024, 1024) on your input and see if that helps.
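Note that Resize behaves differently depending on the argument: a single int only scales the shorter edge (keeping the aspect ratio), while an (h, w) tuple forces an exact size. Something like this (a sketch from memory, adjust names to your loader):

import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.Resize((1024, 1024)),  # tuple -> exactly 1024x1024
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])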

Also, I haven’t used this repo with the newer releases of PyTorch (I believe I developed it on 0.2-something), so you are bound to see some deprecation warnings.
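As for the ([4]) vs ([4, 1]) warning specifically: that usually just means the discriminator output still has a trailing channel dimension when it reaches BCELoss. Flattening it should silence the warning; a sketch with DCGAN-style names (I haven’t re-checked the repo’s exact variables):

output = netD(input).view(-1)      # (batch, 1) -> (batch,)
errD = criterion(output, label)    # label has shape (batch,)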

Hope that helps! :smile:

Hi and thanks for replying,

I thought using transforms.CenterCrop(image_size) would do the trick, but I’ll try what you propose anyhow :wink:

Hi,

I did what you suggested, but it didn’t alter the output :cry:

Can you post the errors that you are seeing? Looking at your original post again, it looks like you may be comparing images of different sizes in the loss function.

Hard to tell without seeing the exceptions, though.

If you can add the 512x512 version to GitHub, that would help. Thank you, great work!