The following code is the generator (G) update step from the "dcgan" example (examples/dcgan folder).
############################
# (2) Update G network: maximize log(D(G(z)))
############################
netG.zero_grad()
label.data.fill_(real_label) # fake labels are real for generator cost
output = netD(fake)
errG = criterion(output, label)
errG.backward()
D_G_z2 = output.data.mean()
optimizerG.step()
"optimizerG" is defined earlier as:
optimizerG = optim.Adam(netG.parameters(), lr = opt.lr, betas = (opt.beta1, 0.999))
Here, "errG" is computed from "output", and "output" is computed by "netD". How are "errG" and "netG" connected? In other words, which lines of code cause "errG" to be backpropagated through "netG", even though there is no explicit link between them?