Hi!
In the DCGAN example (https://github.com/pytorch/examples/blob/master/dcgan/main.py),
the discriminator is trained on fake data like this:
```python
# train with fake
noise.resize_(batch_size, nz, 1, 1).normal_(0, 1)
noisev = Variable(noise)
fake = netG(noisev)
labelv = Variable(label.fill_(fake_label))
output = netD(fake.detach())
errD_fake = criterion(output, labelv)
errD_fake.backward()
D_G_z1 = output.data.mean()
errD = errD_real + errD_fake
optimizerD.step()
```
I want to get the gradient of `errD_fake` with respect to the `fake` variable (i.e., `fake = netG(noisev)`).
However, when I print `fake.grad` after `errD_fake.backward()`, it is `None`, even though `fake.requires_grad` is `True`.
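To isolate the behavior outside the DCGAN script, here is a minimal sketch of what I am observing (a toy graph standing in for `netG`/`netD`, not the actual models):

```python
import torch

# Leaf tensor (analogous to the noise input).
x = torch.randn(3, requires_grad=True)

# Non-leaf, intermediate result (analogous to fake = netG(noisev)).
fake = x * 2

# Scalar loss (analogous to errD_fake).
loss = (fake ** 2).sum()
loss.backward()

print(x.grad is None)     # False: gradients of leaf tensors are populated
print(fake.grad is None)  # True: .grad of an intermediate tensor stays None
```

Even in this stripped-down version, the intermediate tensor's `.grad` stays `None` after `backward()`, which matches what I see with `fake` in the DCGAN training loop.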
Is this a bug, or is there something I'm missing?
Thanks in advance,
Kimin Lee.