To improve the utilization of the graphics card, I decided to run multiple models at the same time, and I found that torch.multiprocessing could do this.

Here is my code:

import torch
import torch.multiprocessing as mp

pool = []
for i in range(num_workers):
    model = torch.load(…).eval()
    p = mp.Process(
        target=target_function,
        args=(arg1, arg2, model),
    )
    p.start()
    pool.append(p)

for p in pool:
    p.join()

However, all the parameters of model inside target_function are 0. I guess there is some problem with how the model parameters are passed to the child processes. How can I solve this problem?
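For comparison, here is a minimal sketch of the pattern usually recommended for sharing a model with torch.multiprocessing: call model.share_memory() before starting the workers so the parameters live in shared memory. The names worker and the nn.Linear stand-in for torch.load(…) are my own assumptions, not from the original code. The sketch uses the "fork" start method so it runs self-contained on CPU; with CUDA tensors you must use mp.get_context("spawn") and put the launching code under an `if __name__ == "__main__":` guard.

```python
import torch
import torch.nn as nn
import torch.multiprocessing as mp

def worker(model, queue):
    # Child process: sum absolute parameter values to check they are nonzero.
    with torch.no_grad():
        total = sum(p.abs().sum().item() for p in model.parameters())
    queue.put(total)

# "fork" keeps this CPU demo self-contained; for CUDA tensors use
# mp.get_context("spawn") and an `if __name__ == "__main__":` guard.
ctx = mp.get_context("fork")

model = nn.Linear(4, 2)   # hypothetical stand-in for the torch.load(…) model
model.share_memory()      # move parameters into shared memory before forking
model.eval()

queue = ctx.SimpleQueue()
procs = [ctx.Process(target=worker, args=(model, queue)) for _ in range(2)]
for p in procs:
    p.start()
results = [queue.get() for _ in procs]
for p in procs:
    p.join()

# Nonzero sums in every child mean the real weights arrived, not zeros.
print(all(r > 0 for r in results))
```

If the zeros only appear with CUDA models, the start method is the first thing I would check: the default "fork" does not play well with an already-initialized CUDA context, and "spawn" pickles the arguments, which is where share_memory() matters.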