Hi there. I am currently doing some proof-of-concept work on a DCGAN with rectangular images. I am completely new to DCGANs as well.
```python
class Discriminator(nn.Module):
    def __init__(self, ngpu):
        super(Discriminator, self).__init__()
        self.ngpu = ngpu
        self.main = nn.Sequential(
            nn.Conv2d(nc, ndf, 2, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 2, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 2, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, ndf * 8, 2, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 8, 1, 2, 1, 0, bias=False),
            nn.Sigmoid()
        )

    def forward(self, input):
        return self.main(input)
```
This is the current code for my Discriminator, where nc = 3 and ndf = 64. My images are 72 (width) x 14 (height). I have also modified the network to use 2x2 kernels throughout.
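As a sanity check on the 2x2 kernels, the spatial output size of an `nn.Conv2d` layer (with dilation 1) follows the standard formula `floor((in + 2*padding - kernel) / stride) + 1`. A minimal pure-Python sketch of that formula (the helper name `conv_out` is my own, not from the code above):

```python
def conv_out(size, kernel, stride, padding):
    """Spatial output size of one Conv2d dimension (dilation 1)."""
    return (size + 2 * padding - kernel) // stride + 1

# First Discriminator layer: kernel 2, stride 2, padding 1,
# applied to a 14 (height) x 72 (width) input.
print(conv_out(14, 2, 2, 1), conv_out(72, 2, 2, 1))  # → 8 37
```

Note that with padding 1 and kernel 2, each layer outputs `floor(in / 2) + 1` rather than exactly halving the input.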
While training the Discriminator, I noticed that an error is thrown when I calculate the loss. The loss function used is BCELoss().
```python
real_cpu = data.to(device)
b_size = real_cpu.size(0)  # batch size
# batch size is 64, so this creates 64 labels -- makes sense
label = torch.full((b_size,), real_label, dtype=torch.float, device=device)
output = netD(real_cpu).view(-1)
print(real_cpu.size())
print(netD(real_cpu).size())
print(output)
print(netD(netG(torch.randn(b_size, nz, 1, 1))).size())
errD_real = criterion(output, label)
```
Based on the code above, the variable `output` has a size of 320 when it should only have 64 elements (one per sample), so `criterion(output, label)` fails because `output` and `label` have mismatched sizes. Is it possible to get some advice on this issue? Would appreciate any help. Thank you!
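To see where the 320 might come from, here is a pure-Python trace of the height and width through the five conv layers for a 14 x 72 input, using the standard Conv2d size formula (the `conv_out` helper is hypothetical, not part of my code):

```python
def conv_out(size, kernel, stride, padding):
    """Spatial output size of one Conv2d dimension (dilation 1)."""
    return (size + 2 * padding - kernel) // stride + 1

h, w = 14, 72  # input height, width
layers = [(2, 2, 1)] * 4 + [(2, 1, 0)]  # (kernel, stride, padding) per conv
for i, (k, s, p) in enumerate(layers, 1):
    h, w = conv_out(h, k, s, p), conv_out(w, k, s, p)
    print(f"after conv {i}: {h} x {w}")
# → after conv 5: 1 x 5
```

If this arithmetic is right, the final conv produces a 1 x 5 map instead of 1 x 1, so `view(-1)` on a (64, 1, 1, 5) output yields 64 * 5 = 320 values, which no longer matches the 64-element `label` tensor.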