How to modify the all_reduce operator in PyTorch?

Thanks for the quick and kind reply, @enisberk! I'm wondering: if I use DistributedDataParallel, does it perform all_reduce automatically or not? I asked the same question here, but I want to ask it again in this thread, and I'll remove the other one once this is clarified.
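
For context, here is a minimal sketch of my current understanding (please correct me if it's wrong): DDP seems to run all_reduce on the gradients inside `backward()`, and `register_comm_hook` looks like the intended way to customize that step. The gloo backend, the tiny `Linear` model, and the torchrun-style launch below are just assumptions for illustration:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Assumes a launch via torchrun, which sets RANK/WORLD_SIZE env vars.
    dist.init_process_group(backend="gloo")

    model = DDP(torch.nn.Linear(10, 10))

    # Custom comm hook: replaces DDP's built-in gradient all_reduce.
    # This one just reproduces the default behavior (average across ranks).
    def average_allreduce_hook(state, bucket):
        tensor = bucket.buffer()
        tensor.div_(dist.get_world_size())
        fut = dist.all_reduce(tensor, async_op=True).get_future()
        # The future resolves to a list of tensors; return the reduced buffer.
        return fut.then(lambda f: f.value()[0])

    model.register_comm_hook(state=None, hook=average_allreduce_hook)

    # Plain training step: DDP triggers the hook (or its default all_reduce)
    # during backward(), so no explicit dist.all_reduce appears in user code.
    out = model(torch.randn(4, 10))
    out.sum().backward()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

If that picture is right, the answer to my own question would be "yes, all_reduce happens automatically", and modifying it would mean supplying a different hook body instead of calling all_reduce by hand. Is that correct?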