How to reuse shared memory when doing multiprocessing

I am under the impression that if I have some data in a torch Tensor and I put it into a multiprocessing queue (from torch.multiprocessing), that tensor gets copied into shared memory. If so, how can I keep the queue from making a new shared-memory copy on every put? In other words, how do I reuse that shared memory?
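
For concreteness, here is a minimal sketch of my current setup (the names and sizes are just illustrative). My understanding is that every queue.put(t) below moves the freshly allocated tensor into a new shared-memory segment:

```python
import torch
import torch.multiprocessing as mp

def consumer(queue):
    # Pull tensors off the queue until a None sentinel arrives.
    while True:
        t = queue.get()
        if t is None:
            break
        t.sum()  # placeholder for the real work

if __name__ == "__main__":
    mp.set_start_method("spawn")
    queue = mp.Queue()
    p = mp.Process(target=consumer, args=(queue,))
    p.start()

    for step in range(100):
        t = torch.randn(1000)  # a fresh tensor each iteration
        queue.put(t)           # presumably copied into new shared memory each time?
    queue.put(None)            # tell the consumer to stop
    p.join()
```

Ideally I would allocate the shared memory once up front and have later puts (or in-place writes) reuse it, rather than paying for a new segment per tensor.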