[Differential Privacy] Does Autograd need the input tensor when performing the backward pass?

Hi.

We are undertaking research into the applications of Differential Privacy (DP) to neural networks. Building on some existing work [1], we are attempting to understand the internals of PyTorch's Autograd, to see where it may be appropriate to inject DP into different parts of a neural net.

One of the core principles of DP is that once a DP mechanism has been applied to an input, any post-processing of its output remains DP, provided that the post-processing does not use information derived from the original dataset. In other words, once we have privatized some data, we cannot 'look' at the original dataset again without breaking the privacy guarantee.
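
To make our mental model concrete, here is a toy sketch of what we mean (our own illustration, not code from [1]; `gaussian_mechanism`, the clipping norm and the noise multiplier are made-up names/values for this example):

```python
import torch

# Toy post-processing example (our own sketch): apply a Gaussian mechanism once,
# then only ever compute on the privatized output, never on the raw data again.
def gaussian_mechanism(stat, clip_norm=1.0, noise_multiplier=1.0):
    # Clip the data-derived quantity to bound its sensitivity, then add noise.
    scale = torch.clamp(clip_norm / (stat.norm() + 1e-12), max=1.0)
    clipped = stat * scale
    return clipped + torch.randn_like(stat) * noise_multiplier * clip_norm

raw = torch.randn(10)                  # stands in for a data-derived quantity
private = gaussian_mechanism(raw)      # the single 'look' at the data
postprocessed = private.clamp(-1, 1)   # fine: uses only the privatized output
```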

One problem we are stuck on is whether the backward() function requires multiple 'looks' at the original Tensor. From watching lectures and reading Autograd's documentation, our understanding is that the forward pass records a graph in which each node represents a gradient calculation, and that the backward pass traverses this graph, so the original Tensor only needs to be used a single time.
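
For example, we have been probing this with the public `torch.autograd.graph.saved_tensors_hooks` API (available in recent PyTorch releases). The snippet below is just our own probe of which tensors the forward pass stashes for backward, not something from the Autograd internals:

```python
import torch

saved = []

def pack(t):
    saved.append(t.shape)   # record what Autograd decides to keep for backward
    return t

def unpack(t):
    return t

x = torch.randn(4, 3, requires_grad=True)
W = torch.randn(3, 2, requires_grad=True)

with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
    y = (x @ W).sum()       # mm saves both operands for its backward

print(saved)                # e.g. [torch.Size([4, 3]), torch.Size([3, 2])]
y.backward()                # the saved tensors are read back here
```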

However, it has been difficult to find direct code references to validate this. [2] appears to specify that backward only receives the gradients corresponding to the Tensors that the forward function returns, plus whatever forward explicitly saved. However, this is quite abstract, and it has been difficult to find the specific implementations within the codebase.
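
To illustrate our reading of [2], here is a minimal custom Function sketch (our own toy example, not taken from function.py): backward only sees grad_output and whatever forward explicitly stashed via ctx.save_for_backward.

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)     # the one place the input is retained
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors     # read back once during the backward pass
        return 2 * x * grad_output

x = torch.randn(5, requires_grad=True)
Square.apply(x).sum().backward()
print(torch.allclose(x.grad, 2 * x.detach()))  # True
```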

I was hoping someone might know, in general, whether the input Tensor needs to be used multiple times when calculating a gradient, and if possible could point to a part of the codebase that shows this clearly?

Thank you :slight_smile:

[1]: Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS '16). Association for Computing Machinery, Vienna, Austria, 308–318. DOI: 10.1145/2976749.2978318.
[2]: pytorch/function.py at 089203f8bc71c64688def521284e32aee3fb5989 · pytorch/pytorch · GitHub