Custom loss function with Opacus in personalized federated learning

Hi. I'm implementing a mean-regularized multi-task learning framework in which each client k minimizes a loss of this form:

    loss_k(w_k) = F_k(w_k) + (lambda / 2) * ||w_k - w_bar||^2

Here F_k is client k's local task loss, w_bar is the mean of the clients' models, and lambda is the personalization degree.
With no noise added it works as expected: the average client accuracy first rises and then falls as lambda increases from zero.
When I make the clients run DP-SGD with Opacus, this rise-then-fall pattern disappears. The average accuracy always stays the same as if every client trained its model purely locally, which suggests the prox term in the loss function somehow has no effect. Does anyone know whether the loss function is really the problem here?
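For context, the per-batch loss is computed roughly like this (a minimal sketch; lam, global_params, and criterion are placeholder names, and global_params holds a frozen copy of the mean model):

    import torch

    def regularized_loss(outputs, targets, model, global_params, lam,
                         criterion=torch.nn.functional.cross_entropy):
        # Local task loss F_k plus the mean-regularization prox term.
        task_loss = criterion(outputs, targets)
        prox = sum(((p - p_bar.detach()) ** 2).sum()
                   for p, p_bar in zip(model.parameters(), global_params))
        return task_loss + 0.5 * lam * prox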

BTW, when I use Opacus in a plain FedAvg setting without personalization, the code works fine. I would really appreciate any thoughts.

I am guessing I need to tune the max_norm of DP-SGD…

Now I know max_norm is kind of irrelevant. So I guess the prox_term is never added to any weight.grad_sample?
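One way to test this guess (a sketch; lam and global_params are placeholder names): backprop the prox term alone and check whether anything lands in grad_sample rather than grad:

    # Sanity check (sketch): backprop the prox term alone. In the
    # default hooks mode, per-sample gradients are derived from module
    # activations, so a loss term that touches parameters directly
    # should populate p.grad but never p.grad_sample.
    optim.zero_grad()
    prox = sum(((p - p_bar.detach()) ** 2).sum()
               for p, p_bar in zip(model.parameters(), global_params))
    (0.5 * lam * prox).backward()
    for name, p in model.named_parameters():
        gs = getattr(p, "grad_sample", None)
        print(name, p.grad.abs().max().item(),
              None if gs is None else gs.abs().max().item())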

Oh I see, this looks like a known bug reported on GitHub…

Will simply setting grad_sample_mode="functorch" in the call to make_private() solve this problem?

No, and grad_sample_mode="ew" does not work either.
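For reference, this is roughly how I switched the mode (a sketch; the noise_multiplier and max_grad_norm values are illustrative):

    from opacus import PrivacyEngine

    privacy_engine = PrivacyEngine()
    model, optim, train_loader = privacy_engine.make_private(
        module=model,
        optimizer=optim,
        data_loader=train_loader,
        noise_multiplier=1.0,
        max_grad_norm=1.0,
        grad_sample_mode="functorch",  # also tried "ew"
    )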

Hi, I manually add the prox term to the per-sample gradients after loss.backward() and before optim.step(), like this:

    for p, diff in zip(self.model.parameters(), n_w_diff):
        # p.grad_sample has shape [batch_size, *p.shape] while diff has
        # shape [*p.shape], so this broadcasts the full prox gradient
        # onto every per-sample gradient, and it then gets clipped
        # together with the data gradients inside optim.step().
        p.grad_sample.data = p.grad_sample.data + diff.data

I got some weird results, though. Is there something I didn't pay attention to?
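One alternative I'm now considering (a sketch under my own assumptions; lr, lam, and mean_params are placeholder names): since the prox gradient lam * (w - w_bar) does not depend on the client's private data, it arguably needs neither per-sample clipping nor noise, so it could be applied as a separate plain update after optim.step():

    import torch

    optim.step()  # clipped + noisy DP step on the task loss F_k only
    with torch.no_grad():
        for p, p_bar in zip(self.model.parameters(), mean_params):
            # gradient of (lam / 2) * ||p - p_bar||^2 w.r.t. p
            p -= lr * lam * (p - p_bar)

Whether skipping clipping and noise for this term is acceptable depends on how w_bar is computed and accounted for in the privacy analysis.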