RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1]] is at version 2; expected version 1 instead

I thought the optimizer_D step and the optimizer_d step were optimizing different networks, even though they share loss_gan.

Yes. Because the loss_gan part is shared, parameters that were saved for the backward pass and are then modified in place by the first optimizer step will still be needed by the second backward. I'm not sure your code includes the part where the optimizers are defined (though I doubt that is relevant here).
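For context, here is a minimal sketch of how this failure typically arises (the module names, optimizer choices, and shapes below are illustrative stand-ins, not your code): the first optimizer's in-place `step()` bumps the version counter of a parameter that the shared graph saved, so the second `backward()` fails.

```python
import torch

net_d = torch.nn.Linear(1, 1)   # stand-in "discriminator"
net_g = torch.nn.Linear(1, 1)   # stand-in "generator"
opt_d = torch.optim.SGD(net_d.parameters(), lr=0.1)
opt_g = torch.optim.SGD(net_g.parameters(), lr=0.1)

x = torch.randn(4, 1)
loss_gan = net_d(net_g(x)).mean()     # shared loss depending on both networks

loss_gan.backward(retain_graph=True)  # autograd saved net_d.weight inside this graph
opt_d.step()                          # in-place update of net_d.weight bumps its version counter

loss_gan.backward()  # RuntimeError: one of the variables needed for gradient computation
                     # has been modified by an inplace operation
opt_g.step()
```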

Could you give me more hints, with some demo code, on how to clone the parameters?

There’s a way to apply this clone automatically to any model: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation (Meta Learning) - #12 by soulitzer
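As a rough illustration of the idea (this is a sketch with made-up names, not the exact code from the linked post): if the forward pass uses clones of the discriminator's parameters, autograd saves the clones rather than the parameters themselves, so the in-place optimizer step no longer invalidates the shared graph.

```python
import torch
import torch.nn.functional as F

net_d = torch.nn.Linear(1, 1)
net_g = torch.nn.Linear(1, 1)
opt_d = torch.optim.SGD(net_d.parameters(), lr=0.1)
opt_g = torch.optim.SGD(net_g.parameters(), lr=0.1)

x = torch.randn(4, 1)
hidden = net_g(x)
# Run the "discriminator" with cloned parameters; the clones are what autograd saves,
# and gradients still flow back into net_d.weight / net_d.bias through the clone op.
loss_gan = F.linear(hidden, net_d.weight.clone(), net_d.bias.clone()).mean()

loss_gan.backward(retain_graph=True)
opt_d.step()         # modifies net_d's parameters in place, but the saved clones are untouched
loss_gan.backward()  # no version-counter error this time
opt_g.step()
```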

I took the first optimizer step after optimizer_D's step, but I still got the errors.

Even though the cloning should fix this issue, I'd suggest looking into the error a bit more to see if we can avoid the extra overhead of cloning. Was this the same error? Could you post the stack trace? (Or, if possible, post a short runnable snippet that demonstrates the issue.)
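One way to avoid the cloning overhead entirely, assuming your training loop allows it (again just a sketch with illustrative names): run every backward pass that depends on the shared graph before either optimizer step mutates the parameters in place.

```python
import torch

net_d = torch.nn.Linear(1, 1)
net_g = torch.nn.Linear(1, 1)
opt_d = torch.optim.SGD(net_d.parameters(), lr=0.1)
opt_g = torch.optim.SGD(net_g.parameters(), lr=0.1)

x = torch.randn(4, 1)
out = net_d(net_g(x))
loss_d = out.mean()   # stand-in discriminator loss
loss_g = -out.mean()  # stand-in generator loss sharing the same graph

# Both backward passes first, while the saved tensors are still at the expected version ...
loss_d.backward(retain_graph=True)
loss_g.backward()
# ... then both in-place parameter updates.
opt_d.step()
opt_g.step()
```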