Training 10 models at once with torch.distributed

Hi everyone,
I have 10 ConvNet models (e.g. ResNet, DenseNet, …). I have successfully trained these models one by one with torch.distributed, but now I want to write a script that trains them all in a single for loop.

```python
def main(args):
    ...
    return a

for model in models:
    args.model = model
    a = main(args)
```
But if I end the main function with `return`, the process hangs, so I replaced it with `sys.exit(0)`, which causes the loop to execute only once.
Any recommendations for this situation?
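For reference, here is a minimal runnable sketch of the pattern I am aiming for. I am guessing the hang comes from the process group not being torn down between runs, so this sketch calls `dist.destroy_process_group()` at the end of `main` instead of `sys.exit(0)` (that fix is my assumption, not something I have verified at scale; the tiny `nn.Linear` stands in for the real ConvNets, and it uses a single-process gloo group just for illustration):

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn

def main(model_name):
    # Single-process "distributed" setup for illustration
    # (gloo backend, world_size=1, rank=0).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = nn.Linear(4, 2)  # stand-in for ResNet/DenseNet/...
    loss = model(torch.randn(8, 4)).sum()
    loss.backward()  # one dummy training step

    # Tear down the process group instead of calling sys.exit(0),
    # so the next loop iteration can init_process_group again.
    dist.destroy_process_group()
    return loss.item()

results = []
for model_name in ["resnet", "densenet"]:
    results.append(main(model_name))
```

With this structure each iteration gets a fresh process group, so `main` can `return` normally and the loop runs for every model.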