Use multiprocessing to train models in parallel

I have 4 GPUs in my machine. I want to use multiprocessing to create 4 processes that train 4 models separately, each process training one model on one GPU. The code is as follows:

    import multiprocessing as mp

    p_random = mp.Pool(processes=4)
    process = []
    for epoch in range(1, args.epochs):
        sp = epoch % 4  # assign a GPU index
        t = p_random.apply_async(train, (epoch, net_cluster['model{}'.format(sp)]))
        process.append(t)

But when I monitor the GPUs, each GPU shows 4 processes. I expected each process to use one specific GPU, so each GPU should have only 1 process.
I am confused about that. Is there anything wrong with my code?
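For what it's worth, the dispatch pattern above can be reproduced CPU-only to see which pool worker ends up handling which GPU index. A `Pool` reuses its 4 workers across tasks, so over many epochs a single worker can be handed tasks for several different GPU indices, and each such task would create a CUDA context on that device. In this sketch, `fake_train` and `run_demo` are hypothetical stand-ins, not part of the original snippet:

```python
import multiprocessing as mp
import os
from collections import defaultdict

def fake_train(epoch, gpu_id):
    # Stand-in for train(): report which worker (pid) ran which GPU index.
    return os.getpid(), gpu_id

def run_demo(num_epochs=16):
    # Same dispatch pattern as the snippet above: a 4-worker pool,
    # with the "GPU" chosen by epoch % 4.
    with mp.Pool(processes=4) as pool:
        async_results = [pool.apply_async(fake_train, (epoch, epoch % 4))
                         for epoch in range(1, num_epochs + 1)]
        results = [r.get() for r in async_results]
    gpus_per_worker = defaultdict(set)
    for pid, gpu_id in results:
        gpus_per_worker[pid].add(gpu_id)
    return gpus_per_worker

if __name__ == '__main__':
    # A worker that prints more than one GPU index was reused across devices.
    for pid, gpus in run_demo().items():
        print(pid, sorted(gpus))
```

If a worker's set contains more than one GPU index, that worker would have touched more than one device in the real training run, which would explain multiple processes showing up per GPU in `nvidia-smi`.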

https://pytorch.org/docs/stable/notes/multiprocessing.html

This document might help you.

Thanks for that. I have checked this page, but I didn't find the answer.

When I use the pool to create 4 processes and assign each process to one GPU, I still end up with 4 processes on each GPU, which confuses me.
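One pattern that would keep exactly one process per GPU (a sketch under my own assumptions, not something from this thread) is to start 4 long-lived processes, each pinned to a single GPU index, and let each process loop over its epochs itself instead of handing every epoch to a shared pool. Here `train_on_gpu` and `launch` are hypothetical names, and the training body is elided:

```python
import multiprocessing as mp

def train_on_gpu(gpu_id, num_epochs, results):
    # Hypothetical stand-in for the real training loop: a CUDA version
    # would pin the device first (e.g. torch.cuda.set_device(gpu_id))
    # and then train one model here, so this process only ever touches
    # one GPU.
    for epoch in range(1, num_epochs + 1):
        pass  # train one epoch on this GPU
    results.put(gpu_id)

def launch(num_gpus=4, num_epochs=5):
    results = mp.Queue()
    procs = [mp.Process(target=train_on_gpu, args=(g, num_epochs, results))
             for g in range(num_gpus)]
    for p in procs:
        p.start()
    # Drain the queue before joining to avoid blocking on buffered items.
    finished = sorted(results.get() for _ in range(num_gpus))
    for p in procs:
        p.join()
    return finished

if __name__ == '__main__':
    print(launch())  # → [0, 1, 2, 3]
```

Because each process is created once and never handed work for a different device, `nvidia-smi` should show exactly one of these processes per GPU.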

Does this example code work as expected on your machine?