Prevent Multithreading in PyTorch C++

I have a simple (fairly small) PyTorch model (a feedforward neural network with 3 hidden layers of 64 units each) that I use heavily for inference. There are other processes running, and I want inference to happen on only one thread. Inference (calling model->Forward(v);) happens on a CPU thread.
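
For context, the model is roughly of this shape (a minimal sketch; the input/output sizes and the ReLU activations are placeholders, not my exact code):

#include <torch/torch.h>

// Small feedforward network: 3 hidden layers of 64 units each.
// in_dim / out_dim and the activations are placeholders.
struct NetImpl : torch::nn::Module {
  NetImpl(int64_t in_dim, int64_t out_dim) {
    fc1 = register_module("fc1", torch::nn::Linear(in_dim, 64));
    fc2 = register_module("fc2", torch::nn::Linear(64, 64));
    fc3 = register_module("fc3", torch::nn::Linear(64, 64));
    out = register_module("out", torch::nn::Linear(64, out_dim));
  }

  torch::Tensor Forward(torch::Tensor x) {
    x = torch::relu(fc1->forward(x));
    x = torch::relu(fc2->forward(x));
    x = torch::relu(fc3->forward(x));
    return out->forward(x);
  }

  torch::nn::Linear fc1{nullptr}, fc2{nullptr}, fc3{nullptr}, out{nullptr};
};
TORCH_MODULE(Net);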

This thread somehow spawns 16 subthreads, which I don't like. (There is a lot of overhead, and the CPU usage on the other threads slows down the tasks running there.) So I call:

torch::set_num_threads(1);          // limit intra-op parallelism to one thread
torch::set_num_interop_threads(1);  // limit inter-op parallelism to one thread

which should prevent multithreading. Somehow this doesn't work, and 16 threads are still created. Any idea what is causing this? Is this a bug?
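
For completeness, here is roughly how I call them. As far as I understand, these setters have to run as early as possible (set_num_interop_threads in particular must be called before any inter-op parallel work has started), and depending on how libtorch was built the OMP_NUM_THREADS / MKL_NUM_THREADS environment variables may also matter. A minimal sketch (the tensor sizes and the matrix multiply are only stand-ins for my real inference call):

#include <torch/torch.h>
#include <iostream>

int main() {
  // Call these before the first tensor operation, so the intra-op /
  // inter-op thread pools are not already created with their default sizes.
  torch::set_num_threads(1);          // intra-op parallelism
  torch::set_num_interop_threads(1);  // inter-op parallelism

  // Depending on the OpenMP/MKL backend, OMP_NUM_THREADS=1 and
  // MKL_NUM_THREADS=1 may also need to be set in the environment
  // before the process starts.

  torch::NoGradGuard no_grad;   // inference only, no autograd
  auto v = torch::randn({1, 64});
  auto w = torch::randn({64, 64});
  auto y = torch::mm(v, w);     // stand-in for model->Forward(v)

  std::cout << "intra-op threads: " << torch::get_num_threads() << std::endl;
  return 0;
}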


I did some more testing. The program I use runs inference on relatively small vectors on all of its threads. The problems above are with 1.10 (stable).

When I use PyTorch LTS (1.8.2), the problems go away: when I spawn a single thread, PyTorch stays on that single thread, i.e. it doesn't spawn additional threads beyond the one I created. As a consequence, I get a huge speed-up with multithreading.
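
One way to compare the two builds is to print the parallel configuration that ATen reports, e.g. with something like this sketch (assuming at::get_parallel_info() is available in both versions):

#include <torch/torch.h>
#include <ATen/Parallel.h>
#include <iostream>

int main() {
  torch::set_num_threads(1);
  torch::set_num_interop_threads(1);

  // Dump ATen's parallel backend and thread settings (OpenMP/MKL,
  // intra-op and inter-op thread counts) to compare 1.8.2 vs. 1.10.
  std::cout << at::get_parallel_info() << std::endl;
  return 0;
}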

Anyhow, for my use case it would be very helpful if PyTorch 1.10 gained a switch to really turn off multithreading.
