Geometric mean of model weights

How do I compute the geometric mean of the weights and biases in a federated learning setting?
I'm only able to compute the arithmetic mean.

Hi en!

You can calculate the geometric mean directly. Suppose that you
have the weights and/or biases whose geometric mean you want
in a single tensor, w. Then:

geom_mean = w.prod()**(1 / w.numel())

Note that it might be preferable to perform the calculation in log-space:

geom_mean = w.log().mean().exp()

(The geometric mean only makes sense – for some definition of
“makes sense” – if the values are all strictly positive. As you can
see, both the fractional power in the direct-space calculation and
the log() in the log-space calculation will fail for negative values.)
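To make this concrete, here is a minimal, self-contained sketch of both calculations (the example tensor is illustrative, not from the thread; it just has to be strictly positive):

```python
import torch

w = torch.tensor([0.5, 2.0, 4.0])            # strictly positive example values

# Direct computation: n-th root of the product of the elements.
geom_mean_direct = w.prod() ** (1.0 / w.numel())

# Log-space computation: exp of the mean of the logs (more numerically stable
# for many elements, since the running product can under/overflow).
geom_mean_log = w.log().mean().exp()

print(geom_mean_direct, geom_mean_log)       # both ≈ tensor(1.5874)
```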

Best.

K. Frank

Thanks for your response, Frank. I tried your approach, but it didn't work for my case.
Instead of the arithmetic mean computation below, I want the geometric mean.

```python
model_dict[k] = th.stack([h_models[i].state_dict()[k].float()*(n*h_lens[i]/total) for i in range(len(h_models))], 0).mean(0)
```

Code source: https://towardsdatascience.com/preserving-data-privacy-in-deep-learning-part-3-ae2103c40c22
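For what it's worth, here is a hedged sketch of how that line could be adapted using the log-space trick from the reply above. The names (th, h_models, h_lens, n, total, model_dict) are taken from the quoted snippet; this only behaves sensibly if every parameter value is strictly positive, which raw network weights usually are not, so treat it as an illustration rather than a drop-in fix:

```python
import torch as th

# Assumes h_models, h_lens, n and total are defined as in the linked article.
model_dict = {}
for k in h_models[0].state_dict().keys():
    # Same stacking of the scaled client parameters as in the quoted snippet.
    stacked = th.stack(
        [h_models[i].state_dict()[k].float() * (n * h_lens[i] / total)
         for i in range(len(h_models))],
        0,
    )
    # Replace .mean(0) with the geometric mean along the stacking dimension:
    # exp(mean(log(x), dim=0)). Produces NaNs if any entry is <= 0.
    model_dict[k] = stacked.log().mean(0).exp()
```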