Gradient Penalty for GANs in PyTorch

I am trying to port this TensorFlow code, which implements a gradient penalty, to PyTorch:

# gradient penalty
real_grads = tf.gradients(tf.reduce_sum(real_output), source)[0]
gp = tf.reduce_sum(tf.square(real_grads), axis=[1, 2, 3])
gp_loss = tf.cast(tf.reduce_mean(gp * (10 * 0.5)), tf.float32)

A solution for this in PyTorch, or a pointer to one, would be greatly appreciated. Thanks!

You can compute the gradients with torch.autograd.grad(), differentiating the summed output with respect to the input (not the parameters) and keeping in mind that it returns a tuple, e.g.:

grads = torch.autograd.grad(out.sum(), source, create_graph=True)[0]

and could then compute the penalty:

p = ((grads ** 2).sum(dim=[1, 2, 3]) * 5).mean()  # 5 == 10 * 0.5 from the TF code
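
Putting the two lines together as a small helper (just a sketch; the function name, the weight default, and the assumption of NCHW image inputs are mine, not taken from your code):

import torch

def gradient_penalty_on_reals(d, real_images, weight=10 * 0.5):
    # differentiate the summed real scores w.r.t. the inputs themselves
    real_images = real_images.detach().requires_grad_(True)
    real_output = d(real_images)
    grads = torch.autograd.grad(real_output.sum(), real_images, create_graph=True)[0]
    # per-sample squared gradient norm over C, H, W, then the batch mean, scaled by the weight
    return weight * (grads ** 2).sum(dim=[1, 2, 3]).mean()

create_graph=True is what allows the penalty itself to be backpropagated through when you update the discriminator.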

Thanks a lot for your input. In the end I managed to implement their gradient penalty and resolve the issues related to torch.autograd.grad() with the following piece of code:

# gradient penalty
# s is the real input batch ("source" in the TF code) and needs requires_grad_(True)
grads = torch.autograd.grad(real_score.sum(), s, create_graph=True)[0]
gradient_penalty = torch.mean((grads ** 2).sum(dim=[1, 2, 3]))
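
For completeness, here is a sketch of how such a penalty can be plugged into a discriminator update. Everything besides the two penalty lines (the toy discriminator D, opt_d, fake_images, and the logistic adversarial loss) is an illustrative assumption, not part of the code above:

import torch
import torch.nn as nn
import torch.nn.functional as F

# toy stand-ins so the snippet runs; replace with the real discriminator and data
D = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
s = torch.randn(4, 3, 32, 32)             # real batch ("source")
fake_images = torch.randn(4, 3, 32, 32)   # generator output

s.requires_grad_(True)                    # required to differentiate w.r.t. the inputs
real_score = D(s)
fake_score = D(fake_images.detach())

# gradient penalty as above
grads = torch.autograd.grad(real_score.sum(), s, create_graph=True)[0]
gradient_penalty = torch.mean((grads ** 2).sum(dim=[1, 2, 3]))

# non-saturating logistic loss used here purely as an example adversarial loss
d_loss = F.softplus(fake_score).mean() + F.softplus(-real_score).mean() \
         + (10 * 0.5) * gradient_penalty  # same weight as in the TF snippet
opt_d.zero_grad()
d_loss.backward()
opt_d.step()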