Influence of `max_grad_norm` on DP parameter `epsilon`

I ran the MNIST example provided on GitHub with different values of the argument `-c` (`max-per-sample-grad_norm`), which determines the `max_grad_norm` parameter of the `privacy_engine`. The other arguments are set to their default values. However, the reported DP parameter epsilon does not change with different values of `max_grad_norm`. Screenshots of the output are provided below.

I am very confused because, according to the theory of DP, I would expect epsilon to be smaller for a smaller `max_grad_norm`. Is there anything wrong with the code? Thanks.


`python -c 1`
`python -c 10`


Hi @QianZhang20

As far as I understand, the privacy budget (epsilon) expended after each epoch depends on two arguments:

  • delta: the target delta
  • alphas: the list of RDP orders (alphas) used to search for the optimal conversion
    from RDP to (eps, delta)-DP

You can see here link. Changing `max_grad_norm` should not influence the privacy budget.

@zoher has provided a correct answer. Although the standard deviation of the noise added in each iteration is proportional to `clipping_norm * noise_multiplier` (and thus depends on `clipping_norm`), the sensitivity of the clipped per-sample gradient is also `clipping_norm`, so the two cancel and it is `noise_multiplier` that determines the final epsilon. In other words, `clipping_norm` only affects the model accuracy.
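To see the cancellation concretely, here is a minimal, self-contained sketch of a Gaussian-mechanism RDP accountant. It is *not* Opacus's accountant (it ignores Poisson-subsampling amplification, so the epsilon values are larger than what the MNIST example reports), and the function name `epsilon_gaussian` is made up for illustration. The point it demonstrates is that `max_grad_norm` appears in both the sensitivity and the noise standard deviation, so it drops out of the RDP expression and the resulting epsilon:

```python
import math

def epsilon_gaussian(noise_multiplier, steps, delta, max_grad_norm=1.0,
                     alphas=range(2, 64)):
    """Epsilon for `steps` compositions of the Gaussian mechanism
    (simplified: no subsampling amplification)."""
    sensitivity = max_grad_norm                      # L2 norm of a clipped gradient
    std = max_grad_norm * noise_multiplier           # noise scale used by DP-SGD
    best_eps = float("inf")
    for a in alphas:
        # Per-step RDP of the Gaussian mechanism at order `a`:
        #   a * sensitivity^2 / (2 * std^2)  ->  a / (2 * noise_multiplier^2),
        # i.e. max_grad_norm cancels between sensitivity and std.
        rdp = steps * a * sensitivity**2 / (2 * std**2)
        # Standard RDP -> (eps, delta)-DP conversion, minimized over orders.
        eps = rdp + math.log(1 / delta) / (a - 1)
        best_eps = min(best_eps, eps)
    return best_eps

eps_c1 = epsilon_gaussian(1.1, 1000, 1e-5, max_grad_norm=1.0)
eps_c10 = epsilon_gaussian(1.1, 1000, 1e-5, max_grad_norm=10.0)
print(eps_c1, eps_c10)  # the two values agree: the clipping norm cancels
```

Only `noise_multiplier`, the number of steps, `delta`, and the alphas searched over move the result, which matches the behavior observed in the screenshots.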