Hey there, we are trying to train a private version of a particular model that uses nn.Parameter directly, and we are getting the error torchdp.dp_model_inspector.IncompatibleModuleException.
In particular, the parameters are defined for the model here and used in the forward function here. To the best of our knowledge, these parameters and their associated operations preserve privacy, because they don't compute any aggregate batch statistics. What would be the recommended way to train with this model definition?
Is there some sort of workaround we could do to wrap these lines in a valid module? Or do we need to wait for the team to add an accepted module to opacus.SUPPORTED_LAYERS?
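In case it helps frame the question, here is a minimal sketch of the kind of wrapping we have in mind. The module name and the elementwise scale-and-shift operation are made up for illustration (our model's actual parameter ops differ); the idea is just to move the raw nn.Parameter usage into its own nn.Module so the privacy engine sees a leaf module rather than free-floating parameters:

```python
import torch
import torch.nn as nn

class ScaleShift(nn.Module):
    """Hypothetical wrapper: owns the raw nn.Parameters and applies them
    in forward(), so they live inside a dedicated leaf module."""

    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        # Purely per-example elementwise ops: no cross-batch statistics,
        # so per-sample gradients should be well defined.
        return x * self.weight + self.bias
```

Our uncertainty is whether wrapping alone is enough, or whether the model inspector would still reject this module because it is not in the accepted-layers list.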