How to train networks in parallel with different parameter settings?

I am trying to train the same model with different parameter settings, in parallel.

The script will create the network and load the train+test data using the DataLoader in PyTorch. Everything works fine if I only run the script with `--params1 ...` in a single terminal.

However, when I run the script with `--params2 ...` in another terminal, the training with params1 is interrupted without any warning or output; shortly afterwards, the training with params2 is interrupted as well. Training only proceeds normally after I close both terminals and run the script again in a new one. I suspect the conflict comes from both processes using a DataLoader on the same dataset (train+test) while running multiple shell instances with different parameters at the same time.

Is there any way in PyTorch to train models with different parameter settings in parallel?
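For example, something like the following sketch, which launches one process per parameter setting from a single Python entry point instead of separate terminals. The `train` function here is a hypothetical placeholder (my real script builds the network and DataLoader, which I have not shown); the point is only that each worker gets its own independent process:

```python
import multiprocessing as mp

def train(cfg):
    # Placeholder for the real training loop: in the actual script,
    # each process would construct its own model and its own
    # DataLoader here, so that nothing is shared between runs.
    name, epochs = cfg
    return name, epochs * 2  # dummy "result" standing in for training

if __name__ == "__main__":
    # One entry per parameter setting (hypothetical values).
    settings = [("params1", 3), ("params2", 5)]
    with mp.Pool(processes=len(settings)) as pool:
        for name, result in pool.map(train, settings):
            print(name, result)  # prints: params1 6, then params2 10
```

Would an approach along these lines avoid the conflict, or is there a recommended way to do this in PyTorch?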

Any help would be greatly appreciated.