DP-SGD using Opacus

Hi all,

I have followed the tutorials on DP image classification using ResNet18.
I have some questions:

  • When the model has many layers, it fails to converge under DP. Are there any recommended approaches to overcome this for large models with many fully connected layers?
  • When I decreased the batch_size for the same model (because of my 8 GB memory limit), the loss went up to 80-200. This suggests it is challenging to use a large batch under DP, even though a large batch size sometimes helps improve model accuracy (see the sketch after this list for how I'm trying to work around the memory limit).
  • Trying different DL models, the non-private model trains in a reasonable time (e.g., 6 minutes), while the private model takes 19 minutes and reaches lower accuracy.
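
For the memory issue, here is a rough sketch of what I'm considering: keep a large logical batch for accuracy/accounting, but cap the physical batch that goes through the GPU with Opacus's BatchMemoryManager. The model, data, and hyperparameter values below are placeholders, not my actual script, so please correct me if I'm misusing the API:

```python
import torch
from torchvision import models
from opacus import PrivacyEngine
from opacus.validators import ModuleValidator
from opacus.utils.batch_memory_manager import BatchMemoryManager

# Placeholder model: ResNet18 with layers Opacus can't handle (e.g. BatchNorm)
# swapped out by ModuleValidator.fix so the model passes validation.
model = ModuleValidator.fix(models.resnet18(num_classes=10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = torch.nn.CrossEntropyLoss()

# Placeholder data with the *logical* batch size I would like to train with.
train_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(
        torch.randn(512, 3, 32, 32), torch.randint(0, 10, (512,))
    ),
    batch_size=512,
)

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,  # placeholder noise level
    max_grad_norm=1.0,     # placeholder clipping norm
)

# Cap the *physical* batch at something that fits in 8 GB; Opacus accumulates
# per-sample gradients and only steps once the logical batch is complete.
model.train()
with BatchMemoryManager(
    data_loader=train_loader,
    max_physical_batch_size=64,  # placeholder, tuned to my GPU memory
    optimizer=optimizer,
) as memory_safe_loader:
    for images, labels in memory_safe_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```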

Is DP-SGD slow mainly because of the per-sample gradient computation? Is there any way to speed up training?
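
To make the speed question concrete, my understanding of where the extra cost comes from is roughly the following (a naive illustration I wrote, not how Opacus is implemented internally; as far as I know Opacus vectorizes this with hooks, but it still has to materialize a gradient per sample for clipping):

```python
import torch
import torch.nn as nn

# Tiny placeholder model and batch, just to illustrate the per-sample cost.
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))

# Standard SGD: one backward pass gives the gradient averaged over the batch.
model.zero_grad()
criterion(model(x), y).backward()

# DP-SGD needs an individual gradient per example so each one can be clipped
# before noise is added; written naively, that is one backward pass per sample.
per_sample_grads = []
for i in range(x.shape[0]):
    model.zero_grad()
    criterion(model(x[i:i + 1]), y[i:i + 1]).backward()
    per_sample_grads.append([p.grad.clone() for p in model.parameters()])
```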

Can we achieve accuracy comparable to the non-private baseline under a modest privacy budget?

Is a DP deep learning model more sensitive to hyperparameters like batch size and noise level, or to the structure of the network itself?

Thanks,