PyTorch Lightning Support?

I’m trying to utilise opacus with the PyTorch Lightning framework which we use as a wrapper around a lot of our models. I can see that there was an effort to integrate this partially into PyTorch Lightning late last year but this seems to have stalled due to lack of bandwidth.

I’ve created a simple MVP but there seems to be a compatibility problem even with this simple model; it throws AttributeError: 'Parameter' object has no attribute 'grad_sample' as soon as it hits the optimization step.
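For context on what that error means: Opacus computes per-sample gradients via module hooks and stores them in a grad_sample attribute on each parameter, and the DP optimizer step then reads that attribute. If the engine's hooks never ran (for example because the engine was never attached to the model/optimizer that Lightning actually uses), the attribute is simply missing and the step fails with exactly this error. A minimal pure-Python sketch of that mechanism (the Parameter stand-in and both helper functions are illustrative, no torch or opacus involved):

```python
class Parameter:
    """Stand-in for torch.nn.Parameter (illustrative only)."""
    def __init__(self):
        self.grad = 0.0  # the regular gradient, always present after backward


def dp_step(params):
    # An Opacus-style optimizer step reads the per-sample gradients that
    # the engine's forward/backward hooks stored on each parameter.
    return [p.grad_sample for p in params]  # AttributeError if hooks never ran


def attach_engine(params):
    # What attaching the engine achieves, greatly simplified: after a
    # backward pass the hooks have populated p.grad_sample on every param.
    for p in params:
        p.grad_sample = [p.grad]  # one gradient per sample in the batch


params = [Parameter(), Parameter()]
try:
    dp_step(params)          # engine never attached -> the reported error
    error = ""
except AttributeError as e:
    error = str(e)

attach_engine(params)        # with the hooks "attached", the step succeeds
per_sample = dp_step(params)
```

So the first thing to check is whether the PrivacyEngine was attached before the first optimizer step, and to the same optimizer instance Lightning hands to the training loop.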

What’s the likely underlying cause of this? I can see on the opacus GitHub that similar errors have been encountered before where it’s been caused by unsupported layers but as the gist shows, this model is incredibly simple so I don’t think it’s any of the layers.

This is with:


Hi James! Would you have time to file a bug and share a colab so I can take a look? Integrating with Lightning is indeed on our plate :slight_smile:

Sure thing; I didn’t want to go straight to a bug report as it didn’t feel right.


Hi @Darktex @James_M,
has there been any progress on the integration with PyTorch Lightning?
I wrote a simple integration myself, also using the LightningCLI. I’m initializing the PrivacyEngine in the before_fit hook of a custom LightningCLI and attaching it to the optimizer in the configure_optimizers function of a typical LightningModule.
It seems to work, but I would be curious whether there’s a best-practice way of integrating it.

Hello @NiWaRe @James_M, we have not worked yet on integrating PyTorch Lightning with Opacus. As mentioned, it is on our roadmap. Meanwhile, if you could share your changes either in a Google Colab or send out a pull request on GitHub, we would consider them when we are ready to start work on this.

Thanks for describing your solution @NiWaRe, do you have a code snippet you could share?

@sayanghosh @amin-nejad sorry, I was busy the last few weeks. I can gladly share the snippets. Concerning a PR: should I rather contribute a tutorial as a Jupyter Notebook, or think about how to integrate it directly into the framework without the need to override different hooks?

What I have for now (prototyping code; the hparams are the params defined in my PL model):

class LightningCLI_Custom(LightningCLI):
    def before_fit(self):
        """Hook to run some code before fit is started."""
        # possible because self.datamodule and self.model are instantiated beforehand
        # in LightningCLI.instantiate_trainer(self) -- see docs

        # TODO: why do I have to call them explicitly here
        #       -- not mentioned in the docs (not found in trainer.fit())

        if self.model.hparams.dp:
            if self.model.hparams.dp_tool == "opacus":
                # NOTE: for now, attaching to the optimizer happens in
                # model.configure_optimizers(), because at this point
                # model.configure_optimizers() hasn't been called yet.
                # That's also why we save n_accumulation_steps as a model parameter.
                sample_rate = self.datamodule.batch_size / len(self.datamodule.dataset_train)
                if self.model.hparams.virtual_batch_size >= self.model.hparams.batch_size:
                    self.model.n_accumulation_steps = int(
                        self.model.hparams.virtual_batch_size / self.model.hparams.batch_size
                    )
                else:
                    self.model.n_accumulation_steps = 1  # neutral
                    print("Virtual batch size has to be bigger than real batch size!")

                # NOTE: for multiple-GPU support: see PL code.
                # For now we only shift to cuda if there's at least one GPU ('gpus' > 0)
                self.model.privacy_engine = PrivacyEngine(
                    sample_rate=sample_rate * self.model.n_accumulation_steps,
                    target_delta=self.model.hparams.target_delta,
                ).to("cuda:0" if self.trainer.gpus else "cpu")
                # necessary if noise_multiplier is dynamically calculated by opacus,
                # in order to ensure that the param is tracked
                self.model.hparams.noise_multiplier = self.model.privacy_engine.noise_multiplier
                print(f"Noise Multiplier: {self.model.privacy_engine.noise_multiplier}")
            else:
                print("Use either 'opacus' or 'deepee' as DP tool.")

        # self.fit_kwargs is passed on to trainer.fit() by the CLI
        self.fit_kwargs.update({
            'model': self.model,
        })

        # in addition to the params saved through the model, save some others from trainer
        important_keys_trainer = ['gpus', 'max_epochs', 'deterministic']
        trainer_config = {
            important_key: self.config['trainer'][important_key]
            for important_key in important_keys_trainer
        }
        # the rest is stored as part of the SaveConfigCallbackWandB
        # (too big to store every metric as part of the above config)
        # track gradients, etc.
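The sample_rate / accumulation bookkeeping in before_fit is plain arithmetic and can be pulled out into a small helper. A sketch (the function name is my own and the batch numbers below are illustrative):

```python
def dp_batching_params(batch_size, dataset_size, virtual_batch_size):
    """Derive Opacus' sample_rate and the number of gradient-accumulation
    steps from the real and virtual batch sizes."""
    sample_rate = batch_size / dataset_size  # fraction of the data per real batch
    if virtual_batch_size >= batch_size:
        n_accumulation_steps = int(virtual_batch_size / batch_size)
    else:
        n_accumulation_steps = 1  # neutral: no accumulation
    # the engine is configured with the effective rate of one *virtual* batch
    effective_rate = sample_rate * n_accumulation_steps
    return sample_rate, n_accumulation_steps, effective_rate


# e.g. real batches of 32 out of 50,000 samples, virtual batches of 128
rates = dp_batching_params(32, 50_000, 128)
```

Keeping this in one place makes it harder for the sample rate the engine sees and the accumulation count the training loop uses to drift apart.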

Then in my model:

class LitModelDP(LightningModule):
    def __init__(...):
        # disable automatic backward to be able to add noise and track
        # the global grad norm (also in the non-dp case, lightning
        # only does per-param grad tracking)
        self.automatic_optimization = False

    # manual training step, eval, etc.

    def configure_optimizers(self):
        optims = {}
        # DeePee: we want params from the wrapped model
        # self.parameters() -> self.model.wrapped_model.parameters()
        if self.hparams.optimizer == 'sgd':
            optimizer = torch.optim.SGD(self.parameters(), lr=self.hparams.lr)
        elif self.hparams.optimizer == 'adam':
            optimizer = torch.optim.Adam(self.parameters(), lr=self.hparams.lr)

        if self.hparams.dp_tool == 'opacus' and self.hparams.dp:
            # attach the engine created in before_fit to the optimizer
            self.privacy_engine.attach(optimizer)

        optims.update({'optimizer': optimizer})
        return optims
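With automatic_optimization = False, the manual training step has to drive the virtual batching itself: in Opacus 0.x the attached optimizer gains a virtual_step() method that clips and accumulates per-sample gradients without adding noise, and only every n_accumulation_steps-th batch calls the real step(). A stand-alone sketch of that control flow (the helper is my own; the two callbacks stand in for optimizer.step() and optimizer.virtual_step()):

```python
def run_accumulation_loop(n_batches, n_accumulation_steps, real_step, virtual_step):
    """Drive one epoch of virtual batching: a real optimizer step every
    n_accumulation_steps batches, a virtual (accumulate-only) step otherwise."""
    for i in range(1, n_batches + 1):
        # loss.backward() would run here, populating the per-sample grads
        if i % n_accumulation_steps == 0:
            real_step()     # noise is added and parameters are updated
        else:
            virtual_step()  # grads are clipped and accumulated only


# count how often each kind of step fires over 10 batches with accumulation of 4
steps = {"real": 0, "virtual": 0}
run_accumulation_loop(
    n_batches=10,
    n_accumulation_steps=4,
    real_step=lambda: steps.__setitem__("real", steps["real"] + 1),
    virtual_step=lambda: steps.__setitem__("virtual", steps["virtual"] + 1),
)
```

Inside the LightningModule this logic would live in training_step, using the optimizer returned by configure_optimizers.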

Calling the LightningCLI at the end:

cli = LightningCLI_Custom(model_class=LitModelDP, datamodule_class=[...])

I’m very open to feedback and would also gladly help to integrate that into PL. :+1:
