About autograd in GAN example

Hi!
In the GAN example (https://github.com/pytorch/examples/blob/master/dcgan/main.py),
while training the D-network on fake data:

    # train with fake
    noise.resize_(batch_size, nz, 1, 1).normal_(0, 1)
    noisev = Variable(noise)
    fake = netG(noisev)
    labelv = Variable(label.fill_(fake_label))
    output = netD(fake.detach())
    errD_fake = criterion(output, labelv)
    errD_fake.backward()
    D_G_z1 = output.data.mean()
    errD = errD_real + errD_fake
    optimizerD.step()

I want to get the gradient of errD_fake with respect to the fake variable (i.e., fake = netG(noisev)).
However, fake.grad is None even though fake.requires_grad is True (I printed fake.grad after errD_fake.backward()).
Is this an issue, or is there something I don't know about?

Thanks in advance,
Kimin Lee.

PyTorch only keeps gradients for leaf variables.
You could save them yourself using a hook, as discussed here: How do I calculate the gradients of a non-leaf variable w.r.t to a loss function?
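For later readers, here is a minimal, self-contained sketch of the hook approach. The tiny netG/netD stand-ins, the grads dict, and the save_fake_grad helper are illustrative names introduced here, not part of the DCGAN example. Note also that the example's fake.detach() cuts the graph at fake, so no gradient can reach it at all; to get d(errD_fake)/d(fake) you have to pass fake itself to netD:

    import torch
    import torch.nn as nn

    # Tiny stand-in networks, just to make the sketch runnable
    netG = nn.Linear(10, 4)                               # stand-in generator
    netD = nn.Sequential(nn.Linear(4, 1), nn.Sigmoid())   # stand-in discriminator
    criterion = nn.BCELoss()

    noise = torch.randn(8, 10)
    fake = netG(noise)            # non-leaf: produced by an operation

    grads = {}
    def save_fake_grad(grad):
        # Called during backward() with d(errD_fake)/d(fake)
        grads['fake'] = grad
    fake.register_hook(save_fake_grad)   # register before backward()

    # Do NOT use netD(fake.detach()) here: detach() would cut the graph
    # at fake, so the hook would never fire.
    output = netD(fake)
    label = torch.zeros(8, 1)     # fake_label = 0
    errD_fake = criterion(output, label)
    errD_fake.backward()

    print(grads['fake'].shape)    # gradient of errD_fake w.r.t. fake

In current PyTorch versions, calling fake.retain_grad() before backward() also works: it makes autograd populate fake.grad directly, even though fake is not a leaf.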

Best regards

Thomas


Thanks!

I see what the problem is now. Your comment is really helpful.

Best regards,

Kimin Lee

@pokaxpoka Could you please share the final solution to this problem? I still have the same error.