PyTorch ignores set_num_threads

I am new to PyTorch, and when I ran the neural language model from the tutorial page I noticed that the program was using only one of my machine's four cores (Mid 2014 MacBook Pro).

After reading some documentation I found that I could change this behaviour through torch.set_num_threads(n). However, after setting the number of threads to 4, for instance, torch.get_num_threads() still returns 1.
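A minimal reproduction of what I am seeing (the printed value is what I actually get, not what I expected):

```python
import torch

torch.set_num_threads(4)
print(torch.get_num_threads())  # prints 1 on my machine, expected 4
```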

Next, I reinstalled PyTorch through conda and noticed that CPU usage rose to about 270% (i.e. about three cores in use), which is a little better, but torch.get_num_threads() still returns 1. I then tried compiling PyTorch from source following the instructions on GitHub, but observed the same behaviour regarding the reported number of threads. My question is: is there a way to make torch.get_num_threads() reflect the actual number of threads that the model will use?

I have seen some issue posts on GitHub discussing setting MKL or OMP options, but I'm not sure how to use these, or even how to install MKL properly (to my understanding this is already done when installing PyTorch through conda).
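For reference, this is my understanding of the approach suggested in those issues; I am assuming the variables have to be set before torch is imported, since OpenMP and MKL seem to read them at startup:

```python
import os

# Assumption based on the GitHub issues: these environment variables
# must be set before torch is imported, because the OpenMP/MKL
# runtimes read them when they initialize.
os.environ["OMP_NUM_THREADS"] = "4"
os.environ["MKL_NUM_THREADS"] = "4"

import torch
print(torch.get_num_threads())
```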

Some additional details:

  • PyTorch version: 0.4
  • Python version: 3.6

I also have a question.
Looking at the 1.0 source code, torch.get_num_threads() only reports the OpenMP value (it does not take the MKL value into account).
Is that intentional?
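A sketch of what I mean, assuming the mkl-service package is installed (it is a separate dependency, not bundled with PyTorch):

```python
import torch
import mkl  # the mkl-service package; an assumption on my part

# torch.get_num_threads() appears to report only the OpenMP setting...
print(torch.get_num_threads())

# ...while MKL keeps its own thread count, queried separately here.
print(mkl.get_max_threads())
```

On a machine where the two runtimes are configured differently, these two calls can return different numbers, which is what prompted my question.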