Out of memory with DRAGAN

Hello! I’m trying to reproduce this paper in code. It uses a combination of losses for training a GAN. One of the losses I need is the DRAGAN loss. On every iteration of the GAN training loop I compute this loss, add it to the other losses, and finally call backward. The code for the loss is:

            # Sample a minibatch of real images and move it to the GPU
            images = torch.FloatTensor(get_data_minibatch()[0]).cuda()
            # DRAGAN: perturb the reals with uniform noise scaled by the batch std
            p_images = images + 0.5 * torch.std(images) * torch.rand(images.size()).cuda()
            p_images = torch.clamp(p_images, -0.99, 0.99)
            p_images.requires_grad_(True)
            D_p_images = discriminator(p_images)[0]
            # Gradient of the discriminator output w.r.t. the perturbed inputs;
            # create_graph=True so the penalty itself stays differentiable
            grad = torch.autograd.grad(outputs=D_p_images.sum(), inputs=p_images, create_graph=True)[0]
            grad = torch.flatten(grad, start_dim=1)
            # Per-sample L2 norm of the gradient
            grad_norm = torch.norm(grad, p=2, dim=1)
            # Penalize deviation of the gradient norm from 1
            L_gp_D = torch.square(grad_norm - 1).mean()
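
For context, the surrounding discriminator step looks roughly like this (a simplified sketch; `optimizer_D`, `L_adv_D`, and `lambda_gp` are placeholder names, not the exact variables from my notebook):

            # Simplified discriminator update; names are placeholders
            optimizer_D.zero_grad()
            loss_D = L_adv_D + lambda_gp * L_gp_D  # add the penalty to the other losses
            loss_D.backward()                      # single backward through everything
            optimizer_D.step()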

When I add this loss to the code, after some training time I get an out-of-memory error in RAM, not in the GPU’s memory. I think there is a memory leak in this code, but I don’t understand how to fix it.

I can send the full notebook if needed.

Here is the DRAGAN loss:
L_gp = λ · E[ (‖∇_x̂ D(x̂)‖₂ − 1)² ],  where x̂ = x + 0.5 · std(x) · u,  u ~ U(0, 1)

So, how can I fix it?