Using torch.Tensor over multiprocessing.Queue + Process fails

This is an unfortunate consequence of how Python pickling interacts with sending file descriptors. (We send tensors via shared memory instead of writing their values to the queue.) The steps are roughly:

  1. The background process sends a token over the mp.Queue
  2. When the main process reads the token, it opens a unix socket to the background process
  3. The background process sends the file descriptor over the unix socket
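Because steps 2 and 3 only happen when the consumer actually reads from the queue, the transfer fails if the sender has already exited by then. Below is a minimal sketch (not from the original post) of one way to avoid that: use torch.multiprocessing and keep the producer alive until the main process has received the tensor. The names `producer` and `done` are illustrative.

```python
import torch
import torch.multiprocessing as mp  # drop-in replacement for multiprocessing


def producer(queue, done):
    t = torch.arange(4, dtype=torch.float32)
    queue.put(t)   # step 1: only a token/handle goes through the queue
    done.wait()    # stay alive so steps 2-3 (the fd handshake) can complete


if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)
    queue = mp.Queue()
    done = mp.Event()
    p = mp.Process(target=producer, args=(queue, done))
    p.start()
    received = queue.get()  # triggers the unix-socket fd exchange
    print(received)
    done.set()              # producer may exit now that the tensor is received
    p.join()
```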