Multi-CPU in 0.4

Hi,
Since switching from PyTorch 0.3 to 0.4, my CPU usage has jumped from single-core (25% utilization) to ~100%, yet prediction/training performance has improved by no more than 5-10% in my use case (small dense networks without a GPU, for example 3x64 units).

This is limiting my ability to run multiple processes in parallel or to do preprocessing on the other cores. Is there any way to disable this multi-CPU usage and revert to the old behavior?

Thanks

Hello,

It would seem that you are looking for torch.set_num_threads.
In ancient versions of PyTorch (< 0.3.1) there was an issue with MKL threads, but that seems long solved.
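For instance, a minimal sketch (the value 1 is just an example, matching the single-threaded behavior you describe):

```python
import torch

# Limit intra-op CPU parallelism to a single thread.
torch.set_num_threads(1)

print(torch.get_num_threads())  # -> 1
```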

Best regards

Thomas


Perfect, thanks! Setting it to 1 gave me the behavior I was looking for.

I’m a little confused about how torch.set_num_threads should be used.

I have a function that imports torch, loads a model, and generates predictions from that model.

I want to call my function across several cores, using multiprocessing.pool.

Do I want to call set_num_threads inside my function, or in the script that calls multiprocessing.pool? Additionally, I'm a bit confused because torch.get_num_threads returns 1 before I've actually called set_num_threads. Does this mean setting it to 1 is unneeded? To make the setup concrete, a rough sketch of what I mean is below.
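Something like this (model.pt and predict are just placeholders, not my real code), where the first option would be calling set_num_threads in a pool initializer so it runs inside each worker process:

```python
import multiprocessing as mp

import torch

def _init_worker():
    # Option 1: restrict each worker process to a single thread.
    torch.set_num_threads(1)
    global model
    # Hypothetical model file; each worker loads its own copy once.
    model = torch.load("model.pt")
    model.eval()

def predict(batch):
    # Run inference on one batch inside the worker.
    with torch.no_grad():
        return model(batch)

if __name__ == "__main__":
    batches = [torch.randn(8, 10) for _ in range(4)]
    # The initializer runs once per worker, so set_num_threads takes effect
    # in the workers rather than only in the parent script.
    with mp.Pool(processes=4, initializer=_init_worker) as pool:
        results = pool.map(predict, batches)
```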


Did you figure this out?

I did not, I’m sorry. I did find these helpful for this and related issues, though: