I have 4 GPUs in my machine. I want to use multiprocessing to create 4 processes that train 4 models separately, each process training one model on one GPU. The code is as follows:
import multiprocessing as mp

p_random = mp.Pool(processes=4)
process = []
for epoch in range(1, args.epochs):
    sp = epoch % 4  # assign gpu
    t = p_random.apply_async(train, (epoch, net_cluster['model{}'.format(sp)]))
    process.append(t)
But when I monitor the GPUs, each GPU has 4 processes on it. Technically, I think each process should use one specific GPU, so each GPU should have only 1 process.
I am confused about this. Is there anything wrong with my code?