How to use multiple GPUs for parallel training

My code is as follows:

As the code shows, the training processes for net1 and net2 are independent, so I want to use device 1 to train net1 and device 2 to train net2 in parallel.
How can I do this?

@cold_wind
You can launch two processes to train in parallel from the command line, like this:
CUDA_VISIBLE_DEVICES=0 python train_net1.py
CUDA_VISIBLE_DEVICES=1 python train_net2.py
You can also do the same in Python with multiprocessing and queues: train both nets in parallel, wait until both finish, and then test them.
For example, this link https://dmitryulyanov.github.io/if-you-are-lazy-to-install-slurm/ may be useful