Memory leak when using autograd.grad with (create_graph=True, retain_graph=True) in improved WGAN

Hello,

I tried using the implementation of “Improved WGAN” from: https://github.com/caogang/wgan-gp.

It requires the latest master build of PyTorch, because computing a second derivative with respect to a gradient obtained from autograd.grad hits a bug in the latest release version.

I tried running the gan_mnist.py file both with the current master version of PyTorch (0.2.0+75bb50b) and with an older commit (0.2.0+c62490b). I also tried copying only the function calc_gradient_penalty() and integrating it into my own code.
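For reference, this is roughly the pattern of that function (paraphrased from memory, not the exact repo code; netD, real_data, fake_data, and the penalty weight are placeholders here):

```python
import torch
from torch import autograd

# Rough sketch of the gradient-penalty pattern in question (paraphrased, not
# the exact repo code); netD, real_data, fake_data and penalty_weight are
# placeholders.
def calc_gradient_penalty(netD, real_data, fake_data, penalty_weight=10.0):
    batch_size = real_data.size(0)
    # One random interpolation coefficient per sample, broadcastable to the data shape
    alpha = torch.rand(batch_size, *([1] * (real_data.dim() - 1)),
                       device=real_data.device)

    interpolates = alpha * real_data + (1 - alpha) * fake_data
    interpolates = interpolates.detach().requires_grad_(True)

    disc_interpolates = netD(interpolates)

    # The call that seems to trigger the leak: gradients of D(x_hat) w.r.t. x_hat,
    # kept in the graph so the penalty itself can be backpropagated through.
    gradients = autograd.grad(outputs=disc_interpolates,
                              inputs=interpolates,
                              grad_outputs=torch.ones_like(disc_interpolates),
                              create_graph=True,
                              retain_graph=True,
                              only_inputs=True)[0]

    gradients = gradients.view(batch_size, -1)
    gradient_penalty = ((gradients.norm(2, dim=1) - 1) ** 2).mean() * penalty_weight
    return gradient_penalty
```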

In both cases there is a memory leak: the GPU memory usage keeps growing every iteration until the program ends with an out-of-memory error.

When I remove the call to autograd.grad, there is no memory leak and the code runs smoothly.
Also, surprisingly, integrating calc_gradient_penalty() into CycleGAN does not cause a memory leak.

Following advice from this forum about memory leaks, I keep the use of Variables to the necessary minimum and work with plain tensors whenever possible, as in the snippet below, but it did not help.
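Concretely, by "keeping Variables to a minimum" I mean things like this hypothetical logging snippet, where only plain Python floats are stored so no reference to the autograd graph survives the iteration:

```python
import torch
from torch.autograd import Variable

# Hypothetical illustration: store a plain float when logging the loss, so the
# logged history does not keep Variables (and their graphs) alive.
loss_history = []

def log_loss(loss):
    # loss is a scalar Variable; float(loss.data[0]) extracts a plain number
    # (on newer PyTorch versions this would be loss.item())
    loss_history.append(float(loss.data[0]))

loss = Variable(torch.randn(1), requires_grad=True)
log_loss(loss)
```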

Can you point me toward what might be causing the problem? Thanks! :slight_smile:

Hi, did you ever find a solution for this? I also have a memory leak in a simple GAN. Not sure how to plug it.