Hi,
Our program runs both PyTorch and another library (Taichi) in the same process. Since both packages use CUDA, this causes a problem when the program runs on a GPU whose compute mode is set to EXCLUSIVE_PROCESS, which allows only one CUDA context on the device. We are considering changing Taichi to re-use the CUDA context created by PyTorch, but is that good practice? Even if PyTorch creates one CUDA context per device for its entire execution lifetime, that seems like an implementation detail we shouldn't rely on. If it isn't good practice, is there a recommended workaround, assuming we have only a single GPU?
Thanks