Demo for distributed GPU training with PyTorch

We are going to train our PyTorch model on a distributed GPU cluster. The cluster consists of several nodes, each with 4 V100 GPUs, and it uses SLURM as the resource scheduler. Is there an example program showing how to run PyTorch training on a SLURM-managed GPU cluster? I would appreciate any advice.
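
To make the question concrete, here is a minimal sketch of the kind of script I have in mind. It assumes one process per GPU launched with `srun`, uses `torch.distributed` with the NCCL backend and `DistributedDataParallel`, and reads the rank information from SLURM environment variables (`SLURM_PROCID`, `SLURM_NTASKS`, `SLURM_LOCALID`). The model and data are just placeholders to show the wiring; please correct me if this is not the recommended way to do it.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def main():
    # SLURM exposes the global rank, world size, and local rank via
    # environment variables when the job is launched with `srun`.
    rank = int(os.environ["SLURM_PROCID"])
    world_size = int(os.environ["SLURM_NTASKS"])
    local_rank = int(os.environ["SLURM_LOCALID"])

    # MASTER_ADDR and MASTER_PORT must be exported (e.g. in the sbatch
    # script) so all processes can rendezvous via the default env:// method.
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(local_rank)

    # Toy model and synthetic data, only to illustrate the DDP setup.
    model = nn.Linear(32, 1).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle differently each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradients are all-reduced across GPUs here
            optimizer.step()
        if rank == 0:
            print(f"epoch {epoch} loss {loss.item():.4f}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

My assumption is that the matching sbatch job would request 4 tasks per node and 4 GPUs per node (e.g. `--ntasks-per-node=4 --gres=gpu:4`), export MASTER_ADDR (first node in the allocation) and MASTER_PORT, and then launch the script with `srun python train.py`. Is this the right pattern, or should I be using `torchrun` on each node instead?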