Adaptive top-k selection in machine learning

In federated/distributed learning, the server initially sends a global model to the clients; each client trains the model locally, selects the top-k values, and sends only those values back to the server.

How can I select an adaptive k on each client? Rather than setting k to a fixed number (e.g. k=3, which returns the top 3 values), I want to make the top-k selection adaptive: some clients would send the top 4 values, others the top 6, based on some defined feature (largest value, largest loss, etc.).
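To make this concrete, here is a minimal sketch of the fixed-k selection I have now, plus one hypothetical adaptive rule; the `adaptive_k` mapping from a client's local loss to k (and its `[k_min, k_max]` range) is just a placeholder I made up:

```python
import torch

def select_top_k(update: torch.Tensor, k: int):
    """Keep the k largest-magnitude entries of a flattened update."""
    flat = update.flatten()
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices

# Hypothetical adaptive rule: clients with a larger local loss send
# more values. The range and the clamping to [0, 1] are made up.
def adaptive_k(local_loss: float, k_min: int = 3, k_max: int = 8) -> int:
    frac = max(0.0, min(1.0, local_loss))  # assumes loss normalized to [0, 1]
    return k_min + round(frac * (k_max - k_min))

update = torch.randn(100)
k = adaptive_k(local_loss=0.4)            # -> k = 5 for this client
values, idx = select_top_k(update, k)
```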

Is there any way to do that?

Appreciate any help, thanks!

You can code up any of these as you wish, but to avoid ruining efficiency, it is best to keep the number of points where the CPU pulls tensors from the GPU for control flow small.
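For example, a threshold-based selection can be written so that only one data-dependent step forces a CPU-GPU synchronization (a minimal sketch, assuming a given threshold `tau`):

```python
import torch

def select_above_threshold(update: torch.Tensor, tau: float):
    # The comparison and mask stay on the GPU; only nonzero() has a
    # data-dependent output shape and thus forces a single sync point.
    # Avoid per-element Python loops, which would sync repeatedly.
    flat = update.flatten()
    mask = flat.abs() > tau
    indices = mask.nonzero(as_tuple=False).squeeze(1)
    return flat[indices], indices
```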

Best regards

Thomas

I guess you are asking how to dynamically communicate a varying number of results. I don’t think c10d collectives support that; you could consider using RPC for this. Let us know if this makes sense or not.
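For example, something along these lines (a minimal sketch; it assumes `rpc.init_rpc` has already been called on every worker and that the server was registered under the name `"server"`):

```python
import torch
import torch.distributed.rpc as rpc

# Server-side handler: RPC serializes whatever tensors it receives, so
# each client can send a different number of (value, index) pairs
# without negotiating shapes in advance.
def receive_update(client_name: str, values: torch.Tensor, indices: torch.Tensor):
    print(f"{client_name} sent {values.numel()} values")
    # ... aggregate the sparse update into the global model here ...

# On a client, after rpc.init_rpc(...) has run on every worker:
# rpc.rpc_sync("server", receive_update,
#              args=("client0", topk_values, topk_indices))
```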

No, that is not what I mean.
I want to reorder the weights from each client in descending order of magnitude, then take the largest ones, where the count varies per client: some clients may send the top 3 weights, others the top 2. So the number of top values would be dynamic rather than static. Usually top-k is static (e.g., k=3, which takes only the top 3 values); how can I make it dynamic using a threshold?
Is there an algorithm or pseudocode for that?
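To show roughly what I'm after, here is a sketch using a threshold relative to the largest magnitude; the relative-threshold rule is just one example I made up:

```python
import torch

def dynamic_topk(weights: torch.Tensor, rel_tau: float = 0.5):
    """Sort by magnitude (descending) and keep every weight whose
    magnitude is at least rel_tau times the largest one, so k varies."""
    flat = weights.flatten()
    mags = flat.abs()
    order = torch.argsort(mags, descending=True)  # descending reorder
    cutoff = rel_tau * mags[order[0]]             # threshold off the max
    k = int((mags[order] >= cutoff).sum())        # dynamic k
    return flat[order[:k]], order[:k]

# One client's weights may yield k=3, another's k=2, depending on
# how concentrated the largest magnitudes are.
```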