In this WGAN-GP training code, the critic's real-sample term is backpropagated like this:

D_real = netD(real_data_v).mean()
D_real.backward(mone)

D_real is an autograd.Variable, and backward is a method associated with it. What is the argument "mone" for?
Same for the argument “one” in:
D_fake = netD(inputv).mean()
D_fake.backward(one)
Does it have something to do with the labels for fake and real? Wouldn't it be enough to call .backward() with no arguments? This seems to be the standard source for WGAN-GP. I see similar code here, but I guess it was modified from the former.
The -1 is multiplied into the gradient, so the loss term enters negatively: D_real.backward(mone) accumulates the gradients of -D_real. The 1 is probably not needed, since .backward() on a scalar uses 1 by default, but we all copied it from the DCGAN in the PyTorch examples or the WGAN code.
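To see what the argument does, here is a minimal sketch (written with current tensors rather than Variables; the variable names and values are made up):

import torch

x = torch.tensor(2.0, requires_grad=True)
one = torch.tensor(1.0)
mone = -one

# backward(g) seeds the chain rule with g instead of the default 1,
# so the accumulated gradients are those of g * output.
y = x * x           # dy/dx = 2x = 4
y.backward(mone)    # accumulates the gradient of -y
print(x.grad)       # tensor(-4.)

x.grad = None       # reset the accumulated gradient
y = x * x           # rebuild the graph (backward freed the old one)
y.backward(one)     # identical to y.backward(): gradient of +y
print(x.grad)       # tensor(4.)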
For DCGAN and plain WGAN I can see the advantage over summing the loss components: calling backward per term frees each term's graph before the next forward pass, which lowers peak memory. For WGAN-GP it's probably just copy-paste without thinking; at least that is the case for me.
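For illustration, a self-contained sketch of that per-term critic step (toy stand-ins for netD, the data, and the optimizer; the real code uses a conv critic, and WGAN-GP adds a gradient-penalty term on top):

import torch
import torch.nn as nn

# Toy stand-ins (the real code uses a conv critic and image batches).
netD = nn.Linear(8, 1)
real_data_v = torch.randn(16, 8)
inputv = torch.randn(16, 8)            # generator output, detached
one = torch.tensor(1.0)
mone = -one
optimizerD = torch.optim.SGD(netD.parameters(), lr=1e-4)

netD.zero_grad()

# Each backward call frees its graph before the next forward runs,
# so the two forward graphs never coexist in memory.
D_real = netD(real_data_v).mean()
D_real.backward(mone)                  # accumulates grads of -D_real

D_fake = netD(inputv).mean()
D_fake.backward(one)                   # same as .backward(): grads of +D_fake

# Net effect: gradients of the critic loss D_fake - D_real,
# identical to (D_fake - D_real).backward() but with lower peak memory.
optimizerD.step()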