Is there a standard procedure to check environment consistency across all nodes in PyTorch DDP training?

In a distributed setup, keeping the environment consistent across all nodes (e.g., driver versions, NCCL versions) can help avoid training issues such as performance or compatibility problems, and makes troubleshooting easier. Does PyTorch provide any standard hooks or other mechanisms to support this kind of check?

In my current setup, the nodes have different hardware and software configurations, and I want to verify that the environment is consistent across all of them before starting DDP training. Is there any built-in functionality in PyTorch to perform these checks, or do I need to implement a custom solution to compare the environments?
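For reference, here is a rough sketch of the kind of custom check I have in mind, assuming the process group is already initialized. It gathers a few version strings from every rank with `all_gather_object` and reports mismatches on rank 0; the helper names and the selection of fields are just placeholders, not an established pattern:

```python
import torch
import torch.distributed as dist


def collect_local_env():
    """Collect a few environment facts on this rank (arbitrary selection for illustration)."""
    return {
        "torch": torch.__version__,
        "cuda": torch.version.cuda,
        "nccl": torch.cuda.nccl.version() if torch.cuda.is_available() else None,
        "gpu": torch.cuda.get_device_name(0) if torch.cuda.is_available() else None,
    }


def check_env_consistency():
    """Gather every rank's environment dict and print mismatches on rank 0."""
    local_env = collect_local_env()
    gathered = [None] * dist.get_world_size()
    dist.all_gather_object(gathered, local_env)

    if dist.get_rank() == 0:
        reference = gathered[0]
        for rank, env in enumerate(gathered[1:], start=1):
            diffs = {k: (reference[k], env[k]) for k in reference if env[k] != reference[k]}
            if diffs:
                print(f"Rank {rank} differs from rank 0: {diffs}")
```

I would call `check_env_consistency()` right after `dist.init_process_group()` and before building the model, but I'm not sure whether this is the recommended approach or whether something built into PyTorch already covers it.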

Any suggestions or best practices on how to handle this situation would be greatly appreciated. Thank you in advance for your help!