Newbie Question: dcgan_faces_tutorial variable adaptation

Sorry for the newbie question, but I am trying to learn PyTorch by example with the dcgan_faces_tutorial notebook.

Having successfully run dcgan_faces_tutorial.ipynb with the provided 64x64 CelebA dataset, I am now trying to adapt it to the 250x250 LFW dataset for more interesting results, setting image_size, ngf, and ndf to 250.

Dataset loading and the Generator/Discriminator construction work fine. However, the training loop exits with:

ValueError                                Traceback (most recent call last)
     27         output = netD(real_cpu).view(-1)
     28         # Calculate loss on all-real batch
---> 29         errD_real = criterion(output, label)
     30         # Calculate gradients for D in backward pass
     31         errD_real.backward()

ValueError: Target and input must have the same number of elements. target nelement (128) != input nelement (18432)

Apparently, some other variables need adapting as well (batch size? latent vector size?). Is there a formula for adapting these variables?

Any help would be greatly appreciated!!


Based on the error message, it seems that netD is not returning a single probability for each sample in the batch (which would have the shape [batch_size=128]), but instead an output of shape [batch_size=128, 1, 12, 12] due to the increased spatial size of the input. After the view(-1) operation this creates a tensor of 128*12*12 = 18432 elements, which no longer matches the 128 target labels.
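You can verify where the 12x12 comes from with the standard conv output-size formula, floor((in + 2*padding - kernel) / stride) + 1, applied to the tutorial's discriminator layout (four stride-2 convs with kernel 4 and padding 1, followed by a final 4x4 conv with stride 1 and no padding). A quick sketch:

```python
def conv_out(size, kernel=4, stride=2, padding=1):
    """Output spatial size of a conv layer: floor((in + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

size = 250
for _ in range(4):
    size = conv_out(size)                    # 250 -> 125 -> 62 -> 31 -> 15
size = conv_out(size, stride=1, padding=0)   # final 4x4 conv: 15 -> 12
print(size)  # 12, matching the [128, 1, 12, 12] output
```

With a 64x64 input the same chain gives 64 -> 32 -> 16 -> 8 -> 4 -> 1, which is why the tutorial works out of the box.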
You would have to make sure the final activation is a single pixel, e.g. by adding more layers (pooling, strided convs, etc.) or by resizing the inputs.
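As one possible sketch (not the tutorial's exact code): resize the images to 256 instead of 250 (a power of two keeps the arithmetic clean), and extend the discriminator with two extra stride-2 blocks so the spatial size reaches 1 before the final conv: 256 -> 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 1. The channel counts below are one plausible choice, not prescribed by the tutorial:

```python
import torch
import torch.nn as nn

ndf = 64
netD = nn.Sequential(
    # tutorial-style blocks: kernel 4, stride 2, padding 1 halves the size
    nn.Conv2d(3, ndf, 4, 2, 1, bias=False),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
    nn.BatchNorm2d(ndf * 2), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
    nn.BatchNorm2d(ndf * 4), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
    nn.BatchNorm2d(ndf * 8), nn.LeakyReLU(0.2, inplace=True),
    # two extra blocks added for the larger 256x256 input
    nn.Conv2d(ndf * 8, ndf * 16, 4, 2, 1, bias=False),
    nn.BatchNorm2d(ndf * 16), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(ndf * 16, ndf * 32, 4, 2, 1, bias=False),
    nn.BatchNorm2d(ndf * 32), nn.LeakyReLU(0.2, inplace=True),
    # final 4x4 conv collapses the remaining 4x4 map to a single pixel
    nn.Conv2d(ndf * 32, 1, 4, 1, 0, bias=False),
    nn.Sigmoid(),
)

x = torch.randn(2, 3, 256, 256)       # small dummy batch just for the shape check
out = netD(x).view(-1)
print(out.shape)  # torch.Size([2]): one probability per sample
```

The Generator would need the mirrored change (two extra ConvTranspose2d blocks) so it produces 256x256 images; batch size and the latent vector size nz can stay as they are.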