I know there are similar posts; I checked nearly all of them. I tried .detach() and the retain_graph=True option, but I couldn't solve it for the training loop of my DCGAN. This is a fairly complex training loop for me, so I would be glad for any help. I also checked the PyTorch tutorial and created this version.
The problem comes from this line: generator_loss.backward(). Also, when I remove the line discriminator_optimizer.step(), the problem disappears.
Full error: RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
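Here is a minimal sketch that reproduces the error (using toy stand-in models, not my real DCGAN nets):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the DCGAN models (hypothetical shapes)
generator = nn.Linear(4, 8)
discriminator = nn.Linear(8, 1)
criterion = nn.BCEWithLogitsLoss()

noise = torch.randn(2, 4)
fake = generator(noise)
discriminator_fake_out = discriminator(fake)

# Discriminator update: this backward() frees the saved intermediates
discriminator_fake_loss = criterion(discriminator_fake_out, torch.zeros(2, 1))
discriminator_fake_loss.backward()

# Generator update reuses the same discriminator_fake_out ...
generator_loss = criterion(discriminator_fake_out, torch.ones(2, 1))
caught = None
try:
    generator_loss.backward()  # ... and raises the "second time" error
except RuntimeError as err:
    caught = err
print(caught)
```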
As a general principle, don't just try things. Analyze the actual cause
of your specific issue and use the "option" that actually addresses that
cause.
Please look at the comments that I've added inline to your quoted code:
The key issue is that you are using discriminator_fake_out to
optimize both discriminator and generator (leading in this particular
attempt to the “backward a second time” error). If you try using retain_graph = True without doing things just so, you are
likely to get an inplace-modification error.
Probably the simplest way to address this issue with training GANs is
to rebuild the discriminator computation graph by calling – at added
computational cost – discriminator_fake_out = discriminator (...)
twice, once for the discriminator_fake_loss.backward()
backpropagation and then again for the generator_loss.backward()
backpropagation.
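A minimal sketch of that recompute-twice pattern, with hypothetical toy models standing in for your DCGAN nets:

```python
import torch
import torch.nn as nn

# Hypothetical toy models standing in for the DCGAN nets
generator = nn.Linear(4, 8)
discriminator = nn.Linear(8, 1)
criterion = nn.BCEWithLogitsLoss()
discriminator_optimizer = torch.optim.SGD(discriminator.parameters(), lr=0.01)
generator_optimizer = torch.optim.SGD(generator.parameters(), lr=0.01)

noise = torch.randn(2, 4)
fake = generator(noise)

# First discriminator (...) call: detach fake so this backward stays
# inside discriminator and leaves generator's graph untouched
discriminator_optimizer.zero_grad()
discriminator_fake_out = discriminator(fake.detach())
discriminator_fake_loss = criterion(discriminator_fake_out, torch.zeros(2, 1))
discriminator_fake_loss.backward()
discriminator_optimizer.step()

# Second discriminator (...) call -- the added computational cost --
# rebuilds the graph, so generator_loss.backward() has a fresh one
generator_optimizer.zero_grad()
discriminator_fake_out = discriminator(fake)
generator_loss = criterion(discriminator_fake_out, torch.ones(2, 1))
generator_loss.backward()  # no "backward a second time" error
generator_optimizer.step()
```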
Yes, as explained in the in-line comments I added to your code.
I don’t believe this. For the version of the code you posted, you will get the
“backward a second time” error after calling generator_loss.backward(),
regardless of whether discriminator_optimizer.step() was called or not.
Perhaps in a different version of your code you had an inplace-modification
error that removing discriminator_optimizer.step() appeared to fix.
If you have (or do) come across inplace-modification errors, it will be
because discriminator_optimizer.step() is modifying discriminator's
parameters inplace.
A discussion about fixing inplace-modification errors that includes a
toy-GAN example can be found in this post:
Yes. One approach to not recalculating discriminator_fake_out is
illustrated in the toy-GAN example in the post I linked to above.
The basic idea is to modify discriminator so that its forward pass uses clone()s (where necessary) of its parameters, because the parameters get
modified inplace when you call discriminator_optimizer.step().
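A sketch of that clone() approach (toy models, not the actual code from the linked post):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CloneDiscriminator(nn.Module):
    # Hypothetical toy discriminator whose forward uses clone()s of its
    # parameters, so a later optimizer.step() (an inplace update) does
    # not invalidate a graph that was built before the step
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 1)

    def forward(self, x):
        return F.linear(x, self.fc.weight.clone(), self.fc.bias.clone())

generator = nn.Linear(4, 8)
discriminator = CloneDiscriminator()
criterion = nn.BCEWithLogitsLoss()
discriminator_optimizer = torch.optim.SGD(discriminator.parameters(), lr=0.01)
generator_optimizer = torch.optim.SGD(generator.parameters(), lr=0.01)

fake = generator(torch.randn(2, 4))
discriminator_fake_out = discriminator(fake)  # computed only once

discriminator_optimizer.zero_grad()
discriminator_fake_loss = criterion(discriminator_fake_out, torch.zeros(2, 1))
discriminator_fake_loss.backward(retain_graph=True)  # keep the graph
discriminator_optimizer.step()  # inplace, but the graph holds the clones

generator_optimizer.zero_grad()
generator_loss = criterion(discriminator_fake_out, torch.ones(2, 1))
generator_loss.backward()  # succeeds: retained graph, cloned parameters
generator_optimizer.step()
```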
No. This will not backpropagate through discriminator (and will therefore
not backpropagate through generator). After calling .detach(), the new
tensor referred to by discriminator_fake_out is not part of any computation
graph, so any backpropagation stops at that point.
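A toy illustration of backpropagation stopping at the .detach():

```python
import torch
import torch.nn as nn

generator = nn.Linear(4, 8)       # hypothetical toy models
discriminator = nn.Linear(8, 1)

fake = generator(torch.randn(2, 4))
discriminator_fake_out = discriminator(fake).detach()  # cut from the graph

print(discriminator_fake_out.requires_grad)  # False -- no graph behind it

caught = None
try:
    discriminator_fake_out.sum().backward()
except RuntimeError as err:
    caught = err  # "... does not require grad and does not have a grad_fn"
print(caught)
```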
Calling .requires_grad_() does not reconnect discriminator_fake_out
to the computation graph. (As a general rule, resetting requires_grad to True to "fix" an issue is an error. Doing so can suppress the reporting of some
error messages, but that doesn't mean your code is correct.)
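A toy illustration: with .detach().requires_grad_() the backward() call runs without raising an error, but no gradients ever reach generator:

```python
import torch
import torch.nn as nn

generator = nn.Linear(4, 8)       # hypothetical toy models
discriminator = nn.Linear(8, 1)

fake = generator(torch.randn(2, 4))
out = discriminator(fake).detach().requires_grad_()

out.sum().backward()  # "works" -- no error is raised ...

print(out.grad)                # ... and out itself gets a gradient,
print(generator.weight.grad)   # but generator gets None: nothing learned
```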