Hello,
I have a model that I trained on GPUs. Now, I want to use it on my test dataset.
The dataset is large, so what I would like to do is use Python multiprocessing to load each image, split it into patches, and forward them to the model on CPUs.
The problem is that I get different speeds when I use a different number of CPUs.
My questions are:
- Does the number of CPUs affect the model's output?
- I init() and load the model once, and then, say, I use 48 CPUs, which means 48 images are processed simultaneously. How does that one model predict 48 patches at the same time?
- What are the differences between `torch.multiprocessing` and Python `multiprocessing`? I currently use the latter; should I move to the former?
- What are the differences between `torch.multiprocessing` and `nn.DataParallel`?
I found this issue, but I am not sure if it is related!
Thanks,