How can Pytorch share memory among several processes?

According to this, ‘processes have separate memory’. But PyTorch can somehow share memory among several processes, according to this link: ‘Once the tensor/storage is moved to shared_memory (see share_memory_()), it will be possible to send it to other processes without making any copies.’ How can memory be shared between processes that each have their own separate memory? Doesn’t that sound like a paradox?
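For reference, here is roughly what I understand the documented call to look like (the tensor name and shape are just placeholders I made up):

```python
import torch

t = torch.zeros(3)
print(t.is_shared())   # False: storage lives in this process's ordinary private memory
t.share_memory_()      # moves the underlying storage into a shared-memory segment, in place
print(t.is_shared())   # True
```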

It uses OS-level shared memory. Multiple processes can map the same shared-memory segment into their own virtual address spaces. The segment may appear at a different virtual address in each process, but it maps to the same underlying physical memory, so a write made by one process is visible to the others without any copying. Also see https://en.wikipedia.org/wiki/Shared_memory.
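As a rough sketch of what that looks like with torch.multiprocessing (the worker function, tensor shape, and start method below are just illustrative assumptions, not the only way to do it):

```python
import torch
import torch.multiprocessing as mp

def worker(shared_t):
    # The child maps the same shared-memory segment, possibly at a different
    # virtual address, so this in-place write is visible to the parent.
    shared_t += 1

if __name__ == "__main__":
    mp.set_start_method("spawn")   # 'fork' would also work on Linux

    t = torch.zeros(4)
    t.share_memory_()              # move the tensor's storage into shared memory

    p = mp.Process(target=worker, args=(t,))
    p.start()
    p.join()

    print(t)                       # tensor([1., 1., 1., 1.]) -- written by the child, no copy made
```

The tensor object itself is pickled and sent to the child, but its storage is not copied; only a handle to the shared-memory segment is transferred.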

Hi pietern,

Is the shared memory you mentioned located in CPU (host) memory? If a tensor is located on the GPU, is it possible to share its GPU memory across jobs?