I'm working with a WGAN whose model initially uses `BatchNorm2d` layers. With the `BatchNorm2d` layers, my code runs without a problem. However, when I do a drop-in replacement with `InstanceNorm2d` layers, I end up with the "Trying to backward through the graph a second time" error.
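For reference, this is roughly the kind of swap I mean. It's a minimal sketch, not my actual model; the `Critic` class, layer sizes, and input resolution are just illustrative:

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self, use_instance_norm=False):
        super().__init__()
        # Drop-in swap: same constructor arguments, only the norm class changes
        norm = nn.InstanceNorm2d if use_instance_norm else nn.BatchNorm2d
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),
            norm(64),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            norm(128),
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 1),  # assumes 64x64 inputs
        )

    def forward(self, x):
        return self.net(x)
```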
Having worked with WGANs, I've seen this error before. However, I don't believe it should occur when swapping `BatchNorm2d` for `InstanceNorm2d` with no other changes.
While attempting to debug the issue, I added `retain_graph=True` to all of my `backward()` calls and to the WGAN's `torch.autograd.grad()` call, but the error still occurs.
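For context, my critic update looks roughly like this. This is a simplified sketch assuming the standard WGAN-GP gradient-penalty recipe; the function and variable names are illustrative, not copied from my code:

```python
import torch

def gradient_penalty(critic, real, fake):
    # Interpolate between real and fake samples
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # create_graph=True so the penalty term is itself differentiable
    grads, = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )
    grad_norms = grads.view(grads.size(0), -1).norm(2, dim=1)
    return ((grad_norms - 1) ** 2).mean()

def critic_step(critic, gen, opt_c, real, z, lam=10.0):
    fake = gen(z).detach()
    loss = (critic(fake).mean() - critic(real).mean()
            + lam * gradient_penalty(critic, real, fake))
    opt_c.zero_grad()
    loss.backward(retain_graph=True)  # added while debugging; didn't help
    opt_c.step()
    return loss
```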
Does anyone happen to have an explanation for why this error occurs with `InstanceNorm2d` when it doesn't with `BatchNorm2d`, and how I might resolve it? Thank you!