Does `torch::deploy` isolate different interpreters?

I just read the torch::deploy code, and it dlopens multiple Python interpreters to avoid Python's GIL. However, as I remember, global symbols across multiple dlopened libraries resolve to the same memory addresses.

So if one Python interpreter imports a C++ library containing a global symbol A, and another interpreter imports a C++ library that also contains a global symbol A, will they conflict? Are global variables in native modules shared across interpreters?

Also, what if a Python interpreter hits a SIGSEGV? Will the other interpreters keep running, or will the whole process crash?


Interpreter C Loading

When an interpreter loads a custom Python C extension, we isolate those symbols within that interpreter's copy. Since C extensions depend on "global" symbols provided by Python, we need to do this; otherwise you run into issues. What we don't currently do is isolate the symbols of the dependencies of those C extensions. That's basically how we make torch work: the Python-specific parts are loaded in the isolated symbol namespace, and libtorch is shared across all interpreters.

Most Python C extensions are self-contained, so they're isolated and shouldn't have any shared state.

I.e. for typical cases:

Interpreter → import foo._C → loads foo/ (interpreter specific, isolated)

If _C depends on a shared library (e.g. libtorch):

Interpreter → import foo._C → loads foo/ (interpreter specific, isolated) → shared library (shared across interpreters)


Generally speaking, SIGSEGVs are not recoverable within a process. Since deploy does have some shared state (i.e. libtorch), if it segfaults it's not guaranteed to be recoverable, so we just crash instead. If you need extra protection from SIGSEGV etc. in a shared environment, I'd recommend minimizing the C extensions you use to well-tested ones when using deploy, or running a separate process per model.

Since libdeploy is a library, you could add a custom signal handler to "gracefully" stop an interpreter and handle certain failure cases, but there are no guarantees of correctness.
