Training multiple models in parallel on multiple GPUs, each with different data

I have a working NN that trains to optimize a set of variables given some input data. I have hundreds of data sets, and so far I have been training one model per data set sequentially in a for loop. Is there something similar to MATLAB's `parfor`, where I can train multiple separate models in parallel, each on its own GPU with its own data? The models are fully independent; they never need to communicate with or update each other. I have a single machine with multiple GPUs, and I want each GPU to train its own model, save it when done, and then pick up the next model in the for loop.
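In case it helps frame the question, here is a minimal sketch of the pattern I have in mind, using Python's standard `multiprocessing.Pool` with one worker per GPU so jobs are queued automatically as workers free up. The names `build_model` and `fit` in the comments are hypothetical stand-ins for my own training code, and the actual GPU training is replaced by a dummy return value so the structure is visible:

```python
from multiprocessing import Pool

NUM_GPUS = 2  # assumption: set to the number of GPUs on the machine

def train_one(job):
    """Train one model on one data set, pinned to one GPU."""
    job_id, data = job
    device_id = job_id % NUM_GPUS  # round-robin GPU assignment
    # Real training would go here, e.g. (PyTorch, hypothetical names):
    #   device = torch.device(f"cuda:{device_id}")
    #   model = build_model().to(device)
    #   fit(model, data, device)
    #   torch.save(model.state_dict(), f"model_{job_id}.pt")
    return job_id, device_id  # dummy result standing in for a trained model

def run_all(datasets, num_workers=NUM_GPUS):
    """Run one training job per data set, at most num_workers at a time."""
    with Pool(processes=num_workers) as pool:
        return pool.map(train_one, list(enumerate(datasets)))

if __name__ == "__main__":
    # Hundreds of data sets in practice; four tiny placeholders here.
    results = run_all([[0.1], [0.2], [0.3], [0.4]])
    for job_id, device_id in results:
        print(f"model {job_id} trained on GPU {device_id}")
```

With `processes=NUM_GPUS`, each GPU trains exactly one model at a time, and as soon as a worker finishes it is handed the next data set from the list, which is the queueing behavior I want.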

Thanks in advance.