Share the CUDA context created by PyTorch

Hi,

Our program runs both PyTorch and another library (Taichi) in the same process. Since each package creates its own CUDA context, this causes a problem when the program runs on a GPU set to EXCLUSIVE_PROCESS compute mode, which allows only one context on the device. We are thinking about changing Taichi to reuse the CUDA context created by PyTorch, but is that good practice? (Even if PyTorch creates exactly one CUDA context per device and keeps it alive for the whole execution, that seems like an implementation detail.) If not, is there any recommendation for how to work around the problem, assuming we have only a single GPU?
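To illustrate the failure mode, here is a hypothetical repro sketch (not our actual code; assumes Linux, device 0, and calls the driver API directly via ctypes): once PyTorch holds the device's context, a second context creation in the same process fails on an EXCLUSIVE_PROCESS device:

```python
import ctypes

import torch

# Hypothetical sketch (Linux): PyTorch holds device 0's context; a second
# cuCtxCreate in the same process then fails when the device is in
# EXCLUSIVE_PROCESS compute mode.
torch.zeros(1, device="cuda")  # force PyTorch's lazy context creation

cuda = ctypes.CDLL("libcuda.so.1")  # CUDA driver API
ctx = ctypes.c_void_p()
status = cuda.cuCtxCreate_v2(ctypes.byref(ctx), 0, 0)  # flags=0, device=0
# 0 is CUDA_SUCCESS; on an EXCLUSIVE_PROCESS device this returns a
# non-zero error code instead.
print("cuCtxCreate status:", status)
```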

Thanks


Maybe you could use the CUDA driver API cuCtxGetCurrent, which returns the current CUDA context.
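Something along these lines (a minimal ctypes sketch, assuming Linux and that PyTorch created its context on the calling thread):

```python
import ctypes

import torch

# Sketch: let PyTorch create its context, then fetch the handle with
# cuCtxGetCurrent so the other library can reuse it.
torch.zeros(1, device="cuda")  # force lazy context creation on device 0

cuda = ctypes.CDLL("libcuda.so.1")  # CUDA driver API
ctx = ctypes.c_void_p()
status = cuda.cuCtxGetCurrent(ctypes.byref(ctx))
assert status == 0, f"cuCtxGetCurrent failed with error {status}"
print("current CUDA context handle:", hex(ctx.value or 0))
# ctx now holds the handle PyTorch is using on this thread.
```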

Thanks for your suggestion 🙂 Yeah, I’ve tried that and it actually works (on a simple test case). What I mainly wanted to know is whether PyTorch guarantees that the CUDA context is not destroyed during the lifetime of the process. Without this guarantee, I think it’s better not to assume the context is sharable.
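In case it helps, one way to sidestep that question (my own assumption, not documented PyTorch behavior): PyTorch uses the device's primary context, which is reference-counted by the driver, so the second library can retain its own reference with cuDevicePrimaryCtxRetain instead of borrowing PyTorch's handle:

```python
import ctypes

# Sketch: hold an independent reference to device 0's primary context.
# The primary context is refcounted, so it stays alive for us even if
# PyTorch were to release its own reference.
cuda = ctypes.CDLL("libcuda.so.1")
cuda.cuInit(0)

ctx = ctypes.c_void_p()
status = cuda.cuDevicePrimaryCtxRetain(ctypes.byref(ctx), 0)
assert status == 0, f"cuDevicePrimaryCtxRetain failed with error {status}"
# ... make ctx current (cuCtxSetCurrent) and do this library's work ...
cuda.cuDevicePrimaryCtxRelease(0)  # balance the retain at shutdown
```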