Can dist.TCPStore store NamedTuple?

I want to share a NamedTuple (such as mytuple) between different ranks. Is the following code possible?

    import torch.distributed as dist

    # The fourth argument is is_master: only the node hosting the store passes True
    tcpstore = dist.TCPStore(MASTER_ADDR, MASTER_PORT, world_size,
                             MASTER_ADDR == LOCAL_ADDR)
    dist.init_process_group('nccl', store=tcpstore, rank=rank, world_size=world_size)
    if rank == 0:
        tcpstore.set("my1", mytuple)
    else:
        value = tcpstore.get("my1")  # get() takes only the key

I have read How to store embeddings from different ranks in DistributedDataParallel mode? - #4 by mrshenli. But I want to know: if I have 8 GPUs, how could I init and pass 8 SimpleQueues?

@JuyiLin could you share more about your motivation? dist.Store is only intended to be used during process group initialization; it is not exposed for arbitrary public usage. It might work out of the box for some cases, but that is not guaranteed.

Specifically, if you want to share a tuple of tensors, you can dist.broadcast each tensor to every rank.
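
For example, here is a minimal sketch of that approach. It assumes the process group is already initialized with the NCCL backend and one GPU per rank; MyTuple and its fields are made-up placeholders:

    import torch
    import torch.distributed as dist
    from typing import NamedTuple

    class MyTuple(NamedTuple):
        # Placeholder fields; substitute your own tensor fields.
        weights: torch.Tensor
        bias: torch.Tensor

    def broadcast_namedtuple(local: MyTuple, rank: int) -> MyTuple:
        # Every rank must hold tensors with the same shapes and dtypes as
        # rank 0; dist.broadcast overwrites them in place with rank 0's data.
        device = torch.device(f"cuda:{rank}")
        tensors = [t.to(device) for t in local]
        for t in tensors:
            dist.broadcast(t, src=0)
        return MyTuple(*tensors)

Non-zero ranks only need correctly shaped buffers; their initial values are discarded by the broadcast.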

Thank you for your time! I have tried to use dist.scatter_object_list, but it failed. Could you have a look?
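
What I tried follows the documented pattern, roughly like this (a sketch, not my exact code; the scattered objects are placeholders). One thing I am not sure about: object collectives such as scatter_object_list may need the gloo backend, or torch.cuda.set_device to be called first, depending on the PyTorch version:

    import torch.distributed as dist

    # scatter_object_output_list must be a pre-allocated list whose first
    # element receives this rank's object; the input list is read on src only.
    output = [None]
    if rank == 0:
        inputs = [("payload", r) for r in range(world_size)]  # placeholders
    else:
        inputs = None
    dist.scatter_object_list(output, inputs, src=0)
    received = output[0]  # rank r receives inputs[r] from rank 0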