How to implement Opacus in a multi-model setup?

In my work, my model is split into four components, say m1, m2, m3, and m4. The output of m1 is passed through m2, then through m3, and finally m4 produces the output. All four modules share a single optimizer, which holds and updates the weights of all of them.

It seems I cannot call make_private individually on m1, m2, m3, and m4, because I get a mismatched-dimensionality error from module m2 onwards.

self.m1, self.DPOptimizer, self.DPDataloader = self.privacy_engine.make_private(
    module=self.m1,
    optimizer=self.optimizer,
    data_loader=self.data_loader,
    noise_multiplier=1.0,  # sigma
    max_grad_norm=self.clip,
)
self.m2, _, _ = self.privacy_engine.make_private(
    module=self.m2,
    optimizer=self.optimizer,
    data_loader=self.data_loader,
    noise_multiplier=1.0,  # sigma
    max_grad_norm=self.clip,
)
self.m3, _, _ = self.privacy_engine.make_private(
    module=self.m3,
    optimizer=self.optimizer,
    data_loader=self.data_loader,
    noise_multiplier=1.0,  # sigma
    max_grad_norm=self.clip,
)
self.m4, _, _ = self.privacy_engine.make_private(
    module=self.m4,
    optimizer=self.optimizer,
    data_loader=self.data_loader,
    noise_multiplier=1.0,  # sigma
    max_grad_norm=self.clip,
)

In this setting, what is the correct way to call make_private?

Thanks for reaching out, and sorry for the delayed response. Is there any reason why you can't chain the modules in an nn.Sequential?

I came across a similar issue. I'm building a synthetic data generation framework using attention. The final loss is a sum of multiple loss terms (each from a different model), and I'm using a single optimizer to carry out the optimization. In this setting, how should I use Opacus's PrivacyEngine.make_private?