PyTorch distributed: calling init_rpc() -> rpc.shutdown() -> init_rpc()

Is it supported to re-initialize the PyTorch distributed RPC framework from the same process after it has previously been shut down?

e.g.

rpc.init_rpc(...)
rpc.shutdown()
rpc.init_rpc(...)

It should work. Did you try it? Did it work for you?

It should work with FileStore; there is still an issue with TCPStore if you call init_rpc() twice.

Ah, I am using TCP communication and was getting hangs. Is there a workaround?
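Following the FileStore suggestion above, a minimal sketch of the workaround: pass a `file://` init_method via `TensorPipeRpcBackendOptions` so rendezvous goes through a FileStore instead of a TCPStore. The worker name, single-process `world_size=1` setup, and file paths are illustrative; using a fresh file for each init avoids stale rendezvous state between runs.

```python
import os
import tempfile

import torch.distributed.rpc as rpc


def init_with_file_store(run_id):
    # A fresh rendezvous file per init avoids leftover state from the
    # previous run interfering with the new one.
    init_file = os.path.join(tempfile.gettempdir(), f"rpc_init_{run_id}")
    opts = rpc.TensorPipeRpcBackendOptions(init_method=f"file://{init_file}")
    rpc.init_rpc("worker0", rank=0, world_size=1, rpc_backend_options=opts)


# First init/shutdown cycle.
init_with_file_store(0)
rpc.shutdown()

# Second init from the same process, using a different rendezvous file.
init_with_file_store(1)
rpc.shutdown()

second_init_succeeded = True
```

Whether this fully avoids the hangs seen with TCP rendezvous may depend on the PyTorch version, so treat it as something to try rather than a guaranteed fix.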