Can PyTorch bypass the Python GIL?

Hi,

Suppose I use Python multi-threading to run inference with two models: thread 1 uses model 1 to infer on picture 1, and thread 2 uses model 2 to infer on picture 2. Suppose the Python GIL switches between the two threads every 2 ms, and each inference takes 200 ms. Will the total time of running the two models concurrently be 400 ms, or less than that? Will the inference on one thread be interrupted and put into a waiting state while the other thread is active, or will it be unaffected?


Hi,

We release the GIL as soon as execution leaves Python code, which covers more or less all PyTorch ops. So the two inferences can genuinely overlap, and the total time should be well under 400 ms.
The backward pass, unless you implement custom autograd Functions, runs entirely outside the GIL.
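A minimal sketch of the scenario described above, assuming two toy `nn.Linear` models stand in for "model 1" and "model 2" (the model and input shapes are made up for illustration). Since PyTorch ops release the GIL, the two threads can overlap rather than strictly serialize:

```python
# Sketch: two threads, two models, one inference each.
# PyTorch ops release the GIL, so the heavy work can run concurrently.
import threading
import torch

# Toy stand-ins for "model 1" and "model 2" (assumed shapes for illustration)
model1 = torch.nn.Linear(256, 256)
model2 = torch.nn.Linear(256, 256)
x1 = torch.randn(64, 256)  # stand-in for "picture 1"
x2 = torch.randn(64, 256)  # stand-in for "picture 2"

results = {}

def infer(name, model, x):
    # Inference only: no_grad avoids autograd bookkeeping
    with torch.no_grad():
        results[name] = model(x)

t1 = threading.Thread(target=infer, args=("model1", model1, x1))
t2 = threading.Thread(target=infer, args=("model2", model2, x2))
t1.start(); t2.start()
t1.join(); t2.join()

print(results["model1"].shape, results["model2"].shape)
```

Each thread still re-acquires the GIL briefly between ops for the Python-level glue, so the overlap is not perfect, but the bulk of the compute runs GIL-free.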
