Is it possible to share PyTorch's memory pool among different processes?

Hi there,

I would like to ask whether multiple processes on the same GPU device can share the same GPU memory region (similar to IPC for CPU memory). Is this implemented in PyTorch's memory pool (the CUDA caching allocator)? If not, would it be possible to achieve by modifying the memory pool?
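For context, here is a minimal sketch of the CPU-side IPC the question refers to, using only Python's standard library `multiprocessing.shared_memory`: two processes attach to the same named shared-memory block, so a write in one is visible in the other. (This is an illustration of the CPU analog only, not of PyTorch's CUDA allocator; the function names `writer` and `demo` are just for this sketch.)

```python
# CPU shared-memory IPC sketch: the analog, on host memory, of what the
# question asks about for GPU memory.
from multiprocessing import Process, shared_memory


def writer(name: str) -> None:
    # Child process: attach to the existing block by name and write into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()


def demo() -> str:
    # Parent process: create a shared block; the child maps the same
    # physical memory, so its write is visible here without any copy.
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        p = Process(target=writer, args=(shm.name,))
        p.start()
        p.join()
        return bytes(shm.buf[:5]).decode()
    finally:
        shm.close()
        shm.unlink()


if __name__ == "__main__":
    print(demo())  # the child's write is visible in the parent
```

For GPU tensors, PyTorch's `torch.multiprocessing` can share CUDA tensors between processes (the receiving process gets a view of the same device memory via CUDA IPC handles), which is related to, but not the same as, sharing the caching allocator's pool itself.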