_run_finalizers and _cleanup warnings when doing multi-GPU training with the PyTorch Distributed module (DDP)

Is it possible that this happens because the DataLoader workers do not shut down?

I saw a post that suggests shutting down the workers after the computation is finished, using dataloader._iterator._shutdown_workers() or del dataloader._iterator.
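For reference, here is a minimal sketch of what I understand that workaround to look like. Note that _iterator and _shutdown_workers() are private PyTorch internals rather than a supported API, and as far as I can tell the DataLoader only keeps a reference in _iterator when persistent_workers=True, so this may break across versions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset/loader; persistent_workers=True keeps the worker
# processes (and dataloader._iterator) alive across epochs.
dataset = TensorDataset(torch.randn(64, 3), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8, num_workers=4,
                        persistent_workers=True)

for epoch in range(2):
    for inputs, targets in dataloader:
        pass  # training step would go here

# Explicitly tear down the worker processes before the program exits,
# instead of leaving them to multiprocessing's finalizers.
# _shutdown_workers() is a private method and may change.
iterator = getattr(dataloader, "_iterator", None)
if iterator is not None:
    iterator._shutdown_workers()
    dataloader._iterator = None
```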

Does this approach make sense?