In my work, my model is split into four components, say m1, m2, m3, and m4. The output of m1 is fed into m2, whose output is fed into m3, and finally m4 produces the output. All four components share a single optimizer, which holds and updates the weights of all of them.
It seems I cannot call make_private individually on m1, m2, m3, and m4, because I get a mismatched-dimensionality error from module m2 onwards. My current attempt looks like this:
self.m1, self.DPOptimizer, self.DPDataloader = self.privacy_engine.make_private(
    module=self.m1,
    optimizer=self.optimizer,
    data_loader=self.data_loader,
    noise_multiplier=1.0,  # sigma
    max_grad_norm=self.clip,
)
self.m2, _, _ = self.privacy_engine.make_private(
    module=self.m2,
    optimizer=self.optimizer,
    data_loader=self.data_loader,
    noise_multiplier=1.0,  # sigma
    max_grad_norm=self.clip,
)
self.m3, _, _ = self.privacy_engine.make_private(
    module=self.m3,
    optimizer=self.optimizer,
    data_loader=self.data_loader,
    noise_multiplier=1.0,  # sigma
    max_grad_norm=self.clip,
)
self.m4, _, _ = self.privacy_engine.make_private(
    module=self.m4,
    optimizer=self.optimizer,
    data_loader=self.data_loader,
    noise_multiplier=1.0,  # sigma
    max_grad_norm=self.clip,
)
In this setting, what is the correct way to call make_private?
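One approach that sidesteps the per-module calls entirely is to wrap the four components in a single nn.Module that chains them in the order described (m1 → m2 → m3 → m4), and then call make_private once on the wrapper. The sketch below assumes this wrapping strategy; `Pipeline` and `attach_privacy` are hypothetical names introduced here for illustration, not part of the Opacus API:

```python
import torch
import torch.nn as nn


class Pipeline(nn.Module):
    """Hypothetical wrapper that presents m1..m4 to Opacus as one module.

    Registering the sub-models as attributes means the wrapper's
    parameters() yields the weights of all four, so a single optimizer
    and a single make_private call cover everything.
    """

    def __init__(self, m1, m2, m3, m4):
        super().__init__()
        self.m1, self.m2, self.m3, self.m4 = m1, m2, m3, m4

    def forward(self, x):
        # Same data flow as in the question: m1 -> m2 -> m3 -> m4.
        return self.m4(self.m3(self.m2(self.m1(x))))


def attach_privacy(pipeline, optimizer, data_loader, clip):
    # Single make_private call over the combined module; the returned
    # model, optimizer, and data loader replace the originals in the
    # training loop. Assumes the Opacus >= 1.0 PrivacyEngine API.
    from opacus import PrivacyEngine

    engine = PrivacyEngine()
    return engine.make_private(
        module=pipeline,
        optimizer=optimizer,
        data_loader=data_loader,
        noise_multiplier=1.0,  # sigma
        max_grad_norm=clip,
    )
```

With this layout, the optimizer is constructed over `pipeline.parameters()` before calling `attach_privacy`, so the privatized optimizer still updates the weights of all four components together.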