You are most likely running into this issue: you are trying to compute gradients from stale forward activations after a parameter update has already been performed.
The link shows a code snippet and describes the issue in more detail. Based on your code I would assume that:
```python
# Do backpropagation for discriminator
discriminator_loss.backward(retain_graph=True)
discriminator_optimizer.step()

# Do backpropagation for generator
target_loss.backward()
```
fails, since `target_loss.backward()` needs the original discriminator parameters to compute the gradients, but these were already updated by `discriminator_optimizer.step()`, so the intermediate forward activations stored in the graph no longer match the parameters and have become stale.
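One way around this is to finish all backward passes before calling `step()`, or to redo the forward pass after the discriminator update so the stored activations match the current parameters. Here is a minimal sketch of the second option; the models, shapes, and losses are placeholders standing in for your actual code:

```python
import torch
import torch.nn as nn

# Toy stand-ins (names and shapes are hypothetical, for illustration only)
generator = nn.Linear(4, 8)
discriminator = nn.Linear(8, 1)
generator_optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)
discriminator_optimizer = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

noise = torch.randn(2, 4)

# 1) Discriminator update: backward and step on a loss whose graph
#    does not need to survive the parameter update
fake = generator(noise)
discriminator_loss = discriminator(fake.detach()).mean()  # placeholder loss
discriminator_loss.backward()
discriminator_optimizer.step()
discriminator_optimizer.zero_grad()

# 2) Generator update: redo the forward pass through the *updated*
#    discriminator so the activations are fresh, then step the generator
target_loss = -discriminator(generator(noise)).mean()     # placeholder loss
target_loss.backward()
generator_optimizer.step()
generator_optimizer.zero_grad()
```

Alternatively, you could keep your original forward passes and simply move `target_loss.backward()` before `discriminator_optimizer.step()`, so both backward calls run against the parameters that produced the activations.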