While learning GANs with PyTorch, I came across an example like the following:
opt_D.zero_grad()
D_loss.backward(retain_variables=True)  # note: renamed retain_graph in current PyTorch
opt_D.step()                            # update the discriminator
opt_G.zero_grad()
G_loss.backward()
opt_G.step()                            # update the generator
Does the parameter “retain_variables=True” mean that PyTorch will fix (freeze) the discriminator’s gradients?
If not, why doesn’t this example fix D’s grads before updating G?
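For reference, here is a minimal sketch (toy tensors, unrelated to any GAN) of what I understand the flag to control — whether the graph is kept alive so a second backward pass can reuse it. The flag is spelled retain_graph in current PyTorch; retain_variables is the older name:

```python
import torch

x = torch.ones(1, requires_grad=True)
y = x * 2           # shared graph node used by both losses
loss1 = y.sum()     # loss1 = 2x
loss2 = (y ** 2).sum()  # loss2 = 4x^2

loss1.backward(retain_graph=True)  # keep the graph for the next backward
loss2.backward()                   # without retain_graph above, this raises a RuntimeError

# Gradients accumulate across the two backward calls:
# d(loss1)/dx = 2, d(loss2)/dx = 8x = 8 at x = 1
print(x.grad)  # tensor([10.])
```

If this reading is right, the flag says nothing about freezing D’s gradients; it only preserves the intermediate buffers so G_loss.backward() can run through the same graph.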