"One of the variables needed for gradient computation has been modified by an inplace operation" error

Hi,

The problem is that in this for loop: https://github.com/tamarott/SinGAN/blob/e1384a9f6dfa45497f4aed5f3e52466d4200fcfb/SinGAN/training.py#L173 you reuse the same `fake` computed with the original `netG` above.
But once you call the first `optimizerG.step()`, the weights of `netG` are modified in place, so you can't backprop through that graph again.
You should recompute `fake` at every iteration with the updated `netG`.
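A minimal sketch of the pattern (not the actual SinGAN code; `netG`, `optimizerG`, and the loss here are stand-ins): by moving the forward pass inside the loop, each `backward()` runs on a fresh graph built from the current weights, so the in-place update from `step()` no longer invalidates saved tensors.

```python
import torch
import torch.nn as nn

netG = nn.Linear(4, 4)  # stand-in for the real generator
optimizerG = torch.optim.Adam(netG.parameters(), lr=1e-3)
z = torch.randn(1, 4)

for _ in range(3):  # stand-in for the inner generator-update loop
    # Recompute `fake` each iteration so the graph is built from
    # the current (post-step) weights of netG.
    fake = netG(z)
    loss = fake.pow(2).mean()  # placeholder generator loss
    optimizerG.zero_grad()
    loss.backward()
    optimizerG.step()  # modifies netG's weights in place
```

If `fake` were computed once before the loop instead, the second `loss.backward()` would raise the in-place modification error, because `step()` bumps the version counters of tensors saved for the backward pass.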
