DDP training log issue

Is this similar to this issue? Distributed 1.8.0 logging twice in a single process, same code works properly in 1.7.0 - #7 by ibro45
PyTorch sets up the loggers somewhere; rebuilding the log handlers, as mentioned there, solves the problem. Personally, I went with loguru since it makes that even easier.