PyTorch multiprocessing question

  1. Is there a code example of sharing a tensor among multiple processes using a process pool? The doc above does not specify how, other than saying we need to move the tensor to “shared memory” and use a “queue”.

  2. Also, if I want all the processes to have access at the same time, and it is guaranteed that the processes only read and never write, using semaphores and queues seems unnecessary. Is there a workaround to avoid the semaphore and queue when sharing tensors?

I don’t know of any example code. But if you use a queue per process in the pool and send the same shared tensor to every queue, all processes end up with an identical tensor whose storage is mapped to the same memory, so they’ll all see every write to that storage. This should also address your second question: if you don’t need coordination between processes because the tensor is read-only, you just send it and access it, and don’t do anything else (or maybe I’m missing something, but that’s my understanding).
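
For what it’s worth, here is a minimal sketch of that queue-per-process idea, assuming CPU tensors and `torch.multiprocessing`; the `worker` function and `NUM_WORKERS` are just placeholder names, not from any official example:

```python
import torch
import torch.multiprocessing as mp


def worker(rank, queue):
    # Block until this worker's queue delivers the shared tensor.
    tensor = queue.get()
    # The tensor's storage is mapped to the same shared memory in every
    # process, so reads (and writes) here touch the memory the parent
    # and all other workers see.
    print(f"worker {rank} sees sum {tensor.sum().item()}")


if __name__ == "__main__":
    NUM_WORKERS = 4  # illustrative value

    # Move the tensor's storage into shared memory before sending it.
    shared = torch.zeros(1000)
    shared.share_memory_()

    ctx = mp.get_context("spawn")
    queues = [ctx.Queue() for _ in range(NUM_WORKERS)]
    procs = [
        ctx.Process(target=worker, args=(rank, q))
        for rank, q in enumerate(queues)
    ]
    for p in procs:
        p.start()

    # Send the same shared tensor to every worker's queue.
    for q in queues:
        q.put(shared)
    for p in procs:
        p.join()
```

The read-only case works the same way: the queue is only used for the initial handoff, and once each worker has received the tensor it can read it freely without any further synchronization.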