From a WGAN-GP tutorial where you create the gradient penalty.
penalty = (tf.norm(tf.gradients(D(interpolation), interpolation), axis=1) - 1) ** 2.0
Can you do this with a 1 liner too in Pytorch?
Sure you can:

penalty = (torch.norm(torch.autograd.grad(outputs=D(interpolation), inputs=interpolation, grad_outputs=torch.ones_like(D(interpolation)), create_graph=True)[0], p=2, dim=1) - 1) ** 2

A few differences from the TensorFlow version: torch.autograd.grad returns a tuple, so you need the [0]; interpolation must have requires_grad=True; grad_outputs supplies the seed gradient for a non-scalar output; and create_graph=True is needed so the penalty itself can be backpropagated when training the critic. p=2 matches tf.norm's default Euclidean norm.
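Spelled out over several lines, the same computation looks like this. This is a minimal sketch, assuming a toy critic D and random stand-ins for the real and fake batches (the shapes and the interpolation scheme are illustrative, not from the tutorial):

```python
import torch
import torch.nn as nn

# Hypothetical toy critic: any differentiable network mapping a batch of
# samples to one scalar score per sample will do.
D = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

real = torch.randn(4, 8)  # stand-in real batch
fake = torch.randn(4, 8)  # stand-in generator output

# WGAN-GP interpolates between real and fake samples; the interpolated
# points must require grad so we can differentiate D w.r.t. them.
eps = torch.rand(4, 1)
interpolation = (eps * real + (1 - eps) * fake).requires_grad_(True)

scores = D(interpolation)

# autograd.grad returns a tuple of gradients (one per input), hence [0].
# create_graph=True keeps the graph so the penalty is itself differentiable.
grads = torch.autograd.grad(
    outputs=scores,
    inputs=interpolation,
    grad_outputs=torch.ones_like(scores),
    create_graph=True,
)[0]

# One penalty value per sample: (||grad||_2 - 1)^2
penalty = (grads.norm(p=2, dim=1) - 1) ** 2
```

In a training loop you would take penalty.mean(), scale it by the lambda coefficient, and add it to the critic loss.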