Hello,
sorry for the newbie question but: I am trying to learn pytorch by example with: https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html
Having successfully executed dcgan_faces_tutorial.ipynb with the provided 64x64 CelebA dataset, I am now trying to adapt it to the 250x250 LFW dataset for more interesting results, changing image_size, ngf and ndf to 250.
Dataset loading, Generator and Discriminator construction work out fine. However the training loop exits on:
ValueError                                Traceback (most recent call last)
in
     27 output = netD(real_cpu).view(-1)
     28 # Calculate loss on all-real batch
---> 29 errD_real = criterion(output, label)
     30 # Calculate gradients for D in backward pass
     31 errD_real.backward()
...
ValueError: Target and input must have the same number of elements. target nelement (128) != input nelement (18432)
Apparently, some other variables need adapting as well (batch size? latent vector? layer parameters?). Is there a formula for adapting these variables to a new image size?
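For reference, here is a small sketch I used to trace where the 18432 comes from. It assumes the discriminator has the tutorial's conv stack (four downsampling convolutions with kernel 4, stride 2, padding 1, followed by a final convolution with kernel 4, stride 1, padding 0) — please correct me if that assumption is wrong:

```python
# Hypothetical check of the discriminator's output spatial size,
# assuming the DCGAN tutorial's conv stack: four conv layers with
# kernel=4, stride=2, padding=1, then a final conv with kernel=4,
# stride=1, padding=0.

def conv_out(size, kernel=4, stride=2, pad=1):
    # Standard conv output-size formula: floor((W + 2P - K) / S) + 1
    return (size + 2 * pad - kernel) // stride + 1

def disc_output_size(image_size, n_downsample=4):
    s = image_size
    for _ in range(n_downsample):
        s = conv_out(s, stride=2, pad=1)
    # Final conv only collapses to 1x1 if s == 4 at this point
    return conv_out(s, stride=1, pad=0)

print(disc_output_size(64))   # 1  -> one logit per image, matches the label
print(disc_output_size(250))  # 12 -> 12*12 = 144 logits per image
```

With batch size 128, 144 logits per image gives 128 * 144 = 18432 elements, which matches the error message — so it looks like the discriminator no longer reduces the input to a single value per image at 250x250.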
Any help would be greatly appreciated!!
Walter