I am stuck on something I was hoping to get some help with. I am trying to train a differentially private multi-label, multi-class model on the NIH Chest X-Ray dataset using the Opacus framework. I have tried different values for the hyperparameters, i.e., max_grad_norm and noise_multiplier, but the model consistently suffers a considerable accuracy loss regardless of the settings. As a sanity test, I set noise_multiplier to 0 and max_grad_norm to large values in the range of 10 to 1,000,000. With these settings, the model should ideally behave like a regular non-differentially-private model, but that's not the case: the AUROC score I get is ~0.5, whereas when I train the model without the privacy_engine() wrapper, like a regular CNN model, the AUROC score is ~0.8.
I am confused as to why there's such a huge accuracy drop when I configure the differentially private model so that it is not really differentially private at all (e.g., noise_multiplier = 0, max_grad_norm = 1000).
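To illustrate why I expect equivalence: the following is a minimal, self-contained sketch of a single DP-SGD step (plain Python, not Opacus internals — the function name and list-of-floats gradient representation are my own simplifications). It shows that with noise_multiplier = 0 and a clipping bound far above every per-sample gradient norm, the DP update reduces to the ordinary average gradient, which is the behaviour my sanity test assumes.

```python
# Minimal sketch (not Opacus itself) of one DP-SGD step.
# Per-sample gradients are represented as plain Python lists of floats;
# dp_sgd_step is a hypothetical helper, not an Opacus API.
import math
import random

def dp_sgd_step(per_sample_grads, max_grad_norm, noise_multiplier, rng=random):
    """Clip each per-sample gradient to max_grad_norm, sum them,
    add Gaussian noise with std noise_multiplier * max_grad_norm,
    then average over the batch (the DP-SGD recipe)."""
    n = len(per_sample_grads)
    dim = len(per_sample_grads[0])
    summed = [0.0] * dim
    for g in per_sample_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, max_grad_norm / norm)  # clipping factor
        for i in range(dim):
            summed[i] += g[i] * scale
    sigma = noise_multiplier * max_grad_norm
    return [(summed[i] + rng.gauss(0.0, sigma)) / n for i in range(dim)]

# With noise_multiplier = 0 and a huge clipping bound, no gradient is
# actually clipped and no noise is added, so the DP step matches the
# plain average of the per-sample gradients.
grads = [[0.3, -0.1], [0.5, 0.2], [-0.4, 0.1]]
plain_avg = [sum(g[i] for g in grads) / len(grads) for i in range(2)]
dp = dp_sgd_step(grads, max_grad_norm=1e6, noise_multiplier=0.0)
assert all(abs(a - b) < 1e-12 for a, b in zip(dp, plain_avg))
```

Given this, I would expect the training dynamics in my sanity test to match the non-private baseline, which is why the AUROC gap surprises me.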
I’d greatly appreciate it if anyone who has worked on this could help me with it.
P.S. Here’s the link to the baseline code I am using. To create an instance of the differentially private model, I simply call the privacy_engine() wrapper and keep everything else as is.