Hi,
I am using libtorch-cpu version 1.5.1 to load a model built in Python and then run forward and backward propagation on it. I noticed that these operations create multiple PyTorch threads. I tried to limit them with at::set_num_threads(1), but that doesn't seem to work.
Also, is there a way to cleanly shut down PyTorch's threads? I can still see them even after the task is complete.
Hi @ptrblck,
Sorry for the late reply. I was experimenting with PyTorch's threads and trying to write a small code snippet to post here, and that exercise helped me realize there isn't actually a problem. If I run the inference or training tasks in a separate thread and exit that thread once all the tasks are complete, the PyTorch thread pool also exits cleanly.
What I was originally seeing happened because I was initially testing in a single-threaded application that stays alive indefinitely; since the main thread never exits, the PyTorch worker threads were not exiting either.