Distributed 1.8.0 logs twice in a single process; the same code works properly in 1.7.0

Is it possible that torch.distributed is configuring the root logger itself?

The reason I think that may be the case is that, for example, messages from my code coming through this unwanted logger look like this:

INFO:midaGAN.utils.environment:PyTorch version: 1.8.0
INFO:Validator:Validation started

while the messages from DDP look like this:

INFO:root:Reducer buckets have been rebuilt in this iteration.
INFO:root:Added key: store_based_barrier_key:2 to store for rank: 1
INFO:root:Added key: store_based_barrier_key:2 to store for rank: 0

Could it be related to [DDP Logging] Log comm. hook in ddp logging by rohan-varma · Pull Request #52966 · pytorch/pytorch · GitHub?