torch.fx and distributed training

Can I use torch.fx to generate a distributed training program by inserting communication operations?
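In principle, yes: `torch.fx` lets you trace a module into a `GraphModule`, rewrite its graph, and recompile it, so you can splice communication calls into the forward pass. Below is a minimal sketch of the idea. The helper `maybe_all_reduce` is a hypothetical wrapper I made up for illustration: it all-reduces a tensor when a process group is initialized and is a no-op otherwise, so the example also runs in a single process. This only covers graph rewriting; a real distributed program would still need `torch.distributed.init_process_group`, a launcher, and care about where in the graph the collectives are semantically valid.

```python
import torch
import torch.distributed as dist
import torch.fx as fx

def maybe_all_reduce(t: torch.Tensor) -> torch.Tensor:
    # Hypothetical comm op: all-reduce across ranks if a process group
    # is initialized, otherwise pass the tensor through unchanged.
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(t)
    return t

class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(4, 4)
        self.fc2 = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Trace the module into an editable fx graph.
gm = fx.symbolic_trace(MLP())

# Insert a communication call after each Linear submodule's output.
for node in list(gm.graph.nodes):
    if node.op == "call_module" and isinstance(
        gm.get_submodule(node.target), torch.nn.Linear
    ):
        with gm.graph.inserting_after(node):
            comm = gm.graph.call_function(maybe_all_reduce, args=(node,))
        # Route all downstream users through the comm node, then point
        # the comm node back at the original output it wraps.
        node.replace_all_uses_with(comm)
        comm.args = (node,)

gm.recompile()
out = gm(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 2])
```

Note that this runs the collectives in the forward pass; inserting communication for gradients (as in data parallelism) is trickier, since `torch.fx` traces only the forward graph, so gradient-side collectives are usually handled via hooks or by tracing a combined forward-backward graph with other tooling.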