Torch.multiprocessing for model inference

Do you have any ideas on this now? Is it faster to use multiprocessing for inference?
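
Here is a rough sketch of the kind of setup I mean, assuming a toy `nn.Linear` model on CPU with dummy batches (the model, worker count, and sizes are all placeholders, not a real workload):

```python
import torch
import torch.multiprocessing as mp

def worker(rank, model, batches, results):
    # Each process runs inference on its own slice of the data.
    with torch.no_grad():
        out = [model(x) for x in batches[rank]]
    results[rank] = torch.cat(out)

if __name__ == "__main__":
    mp.set_start_method("spawn")  # required if CUDA is involved; safe on CPU too
    model = torch.nn.Linear(16, 4)
    model.share_memory()  # put the weights in shared memory so workers don't copy them
    model.eval()

    num_workers = 2
    # Split dummy input batches across the workers.
    batches = [[torch.randn(8, 16) for _ in range(4)] for _ in range(num_workers)]
    results = mp.Manager().dict()

    procs = [mp.Process(target=worker, args=(r, model, batches, results))
             for r in range(num_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print({r: v.shape for r, v in results.items()})
```

On GPU I'd expect this to be less clear-cut, since each process creates its own CUDA context and so uses extra GPU memory, which seems related to the topic below.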

I'm confused about this too, and the topic below may help:
Multiprocessing CUDA memory