Double Backpropagation in PyTorch

Hey guys!

Does anybody know how to implement double backpropagation in PyTorch? I’m specifically referring to Drucker and LeCun’s paper “Improving generalization performance using double backpropagation” (and its more recent use for adversarial training in “Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing Their Input Gradients”).
The idea is to add the squared norm of the gradient of the criterion with respect to the input as a regularization term, so that the loss becomes less sensitive to small perturbations of the input.
Given that we would have to call the backward function of the criterion just to get the gradients with respect to the input, I guess there’s no easy way of doing it in one shot like in the papers.
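
Something along these lines is what I had in mind — just a rough sketch assuming a standard classification setup (`model`, `inputs`, `targets`, `optimizer` and `lambda_reg` are placeholders, and I’ve used cross-entropy as the criterion):

```python
import torch
import torch.nn.functional as F


def double_backprop_step(model, inputs, targets, optimizer, lambda_reg=0.01):
    """One training step with an input-gradient penalty (double backprop)."""
    # Treat the input as a leaf tensor that requires grad so we can
    # differentiate the loss with respect to it.
    inputs = inputs.clone().detach().requires_grad_(True)

    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)

    # First backward pass: d(loss)/d(inputs). create_graph=True keeps the
    # graph so this gradient can itself be differentiated afterwards.
    (input_grad,) = torch.autograd.grad(loss, inputs, create_graph=True)

    # Regularizer: squared L2 norm of the input gradient.
    penalty = input_grad.pow(2).sum()

    total_loss = loss + lambda_reg * penalty

    # Second backward pass: gradients of (loss + penalty) w.r.t. the weights.
    total_loss.backward()
    optimizer.step()
    return loss.item(), penalty.item()
```

Is calling `torch.autograd.grad` with `create_graph=True` and then `backward()` on the combined loss the right way to do this, or is there a cleaner / more efficient approach?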


Hi, by any chance have you found a PyTorch implementation? (PyTorch >= 1.3.1)
Thanks

I found a similar question here, and a related blog post.
