Error when trying Federated Learning with Opacus

Hi @general and thanks for your question.
The reason for the issue you're running into is that you directly manipulate the model weights during the model aggregation stage, which GradSampleModule doesn't expect: it attaches certain private attributes to the parameters and doesn't know what to do when they're missing.
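For reference, a hypothetical FedAvg-style aggregation like the sketch below (not your exact code, and the function name is made up) is the kind of direct weight manipulation that trips this up when the client models are still wrapped:

```python
import copy
import torch

def fedavg(client_models):
    # Naive FedAvg: average each entry of the state dicts elementwise.
    # Assumes all entries are floating-point tensors of matching shapes.
    avg_state = copy.deepcopy(client_models[0].state_dict())
    for key in avg_state:
        avg_state[key] = torch.stack(
            [m.state_dict()[key] for m in client_models]
        ).mean(dim=0)
    # Loading avg_state back into a GradSampleModule-wrapped model (rather
    # than a plain nn.Module) is where the missing-attribute error appears.
    return avg_state
```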

I can suggest one way to work around this: only use GradSampleModule during the training process, while storing and aggregating the original, unwrapped model.

Since you call privacy_engine.make_private() on every round, this shouldn't be a problem for you. Simply doing self.model = model.to_standard_module() at the end of the client training loop (instead of the self.model = model you have now) should do the trick. This way you'll store the original model, and GradSampleModule will re-initialize all the required attributes at the beginning of each round as necessary.
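For concreteness, here is a minimal sketch of what one client round could look like. Names like self.model, self.train_loader, and the optimizer/hyperparameter choices are assumptions standing in for your actual setup:

```python
import torch
from opacus import PrivacyEngine

def train_one_round(self):
    # Re-wrap the stored (plain) model at the start of every round;
    # make_private() returns a GradSampleModule around self.model.
    privacy_engine = PrivacyEngine()
    optimizer = torch.optim.SGD(self.model.parameters(), lr=0.05)  # assumed
    model, optimizer, data_loader = privacy_engine.make_private(
        module=self.model,
        optimizer=optimizer,
        data_loader=self.train_loader,
        noise_multiplier=1.0,  # assumed values
        max_grad_norm=1.0,
    )

    model.train()
    for data, target in data_loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(data), target)
        loss.backward()
        optimizer.step()

    # Unwrap before storing, so the aggregation step only ever sees
    # a plain nn.Module without Opacus' per-parameter attributes.
    self.model = model.to_standard_module()
```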

Hope this helps