Restoring the original model

Hi everyone,
I’m using Opacus and I have a very specific question.
Basically, I want to alternate between training with differential privacy and training without DP:

  1. I wrap the model with make_private and I train this “private model” for some epochs
  2. Then I want to unwrap the model and restore the original one, so I can train a “non-private model” without differential privacy

Is it possible to do this? Can I restore the original model after wrapping it with make_private?
I also have another question: assuming it is possible to unwrap the private model, can I alternate the training of the private model with the training of the original model without calling make_private every time?

Thank you and sorry for the “strange” question.

Hey luc12, thanks for reaching out!

To alternate between DP and vanilla training, you have two options:

  1. Keep the model wrapped in a GradSampleModule but set the noise multiplier to 0 for the vanilla phase (per-sample gradients are still clipped, so you still pay Opacus training time rather than the faster vanilla training time).
  2. Checkpoint model._module.state_dict() and load it into a fresh, non-private model when you want to switch. There may be other ways to remove the backward hooks via their handles, but checkpointing seems to be the safest. Both options are sketched below.
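For concreteness, here is a minimal sketch of both options. It assumes the Opacus 1.x PrivacyEngine.make_private API, that the returned DP optimizer exposes a noise_multiplier attribute, and uses a hypothetical MyModel class; treat it as an illustration of the idea rather than a drop-in recipe.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Hypothetical model, used only for illustration
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

model = MyModel()
optimizer = optim.SGD(model.parameters(), lr=0.1)
data_loader = DataLoader(
    TensorDataset(torch.randn(32, 10), torch.randint(0, 2, (32,))),
    batch_size=8,
)

privacy_engine = PrivacyEngine()
dp_model, dp_optimizer, dp_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)

# --- Option 1: stay wrapped, turn the noise off for the vanilla phase ---
# Per-sample gradients are still clipped, so this phase still runs at Opacus speed.
dp_optimizer.noise_multiplier = 0.0  # assumes the DP optimizer exposes this attribute
# ... train some epochs without noise ...
dp_optimizer.noise_multiplier = 1.0  # switch DP noise back on

# --- Option 2: checkpoint the underlying module and train a plain copy ---
torch.save(dp_model._module.state_dict(), "checkpoint.pt")

plain_model = MyModel()  # fresh model, no Opacus hooks attached
plain_model.load_state_dict(torch.load("checkpoint.pt"))
plain_optimizer = optim.SGD(plain_model.parameters(), lr=0.1)
# ... train without DP, then save plain_model's state_dict and
# re-wrap it with make_private to resume the private phase ...
```

Note that with the second option you would call make_private again each time you return to the private phase, which is why the accounting across phases needs care.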

Of course, the privacy accounting has to be adapted when you switch between the two modes. Hope this helps!

Pierre

Thank you! In the end I used the second solution.