Hi everyone,
I am wondering whether there is any way in PyTorch distributed to build a concurrent queue (or buffer) between a parameter server and its workers.
That way, each worker could act as a producer, pushing messages onto the concurrent queue,
and the parameter server could act as a consumer, popping messages off it.
In addition, the parameter server should be able to query the current length of the queue.
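To make concrete what I mean, here is a single-process sketch of the pattern using only Python's stdlib `queue.Queue` and `threading` (the names `worker` and `parameter_server` are just illustrative). In the real distributed setting, the queue would live on the parameter server process, and workers would push messages via RPC instead of direct method calls:

```python
import queue
import threading

# Sketch of the producer/consumer pattern, single-process for clarity.
# In a distributed setup this queue would be owned by the parameter
# server process, with workers pushing to it remotely.

msg_queue = queue.Queue()  # thread-safe FIFO

def worker(worker_id, num_msgs):
    # Producer: each worker pushes its updates as messages.
    for step in range(num_msgs):
        msg_queue.put((worker_id, f"update-{step}"))

def parameter_server(expected):
    # Consumer: the PS pops messages and can inspect the queue length.
    consumed = []
    while len(consumed) < expected:
        msg = msg_queue.get()  # blocks until a message arrives
        consumed.append(msg)
        # qsize() gives the (approximate) current queue length.
        print(f"pending messages: {msg_queue.qsize()}")
    return consumed

workers = [threading.Thread(target=worker, args=(i, 3)) for i in range(2)]
for t in workers:
    t.start()
results = parameter_server(expected=6)
for t in workers:
    t.join()
print(f"consumed {len(results)} messages")
```

Essentially I am asking whether torch.distributed (or torch.distributed.rpc) offers a built-in equivalent of this, rather than me gluing it together by hand.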
Thank you!