torch.multiprocessing.queue periodically raises RuntimeError: unable to open shared memory object </torch_7****_582458219> in read-write mode at ...build/torch/lib/TH/THAllocator.c:226

I’m periodically sending a tuple between processes via a torch.multiprocessing.queue.Queue(1) object; the tuple consists of a time.time() float and some PyTorch tensors/variables.
queue.get() sits in a while loop and immediately grabs the tuple once the other process puts it on the queue.
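
Roughly, the pattern is the following (a minimal sketch, not my actual script; the tensor size, sleep interval, and function names are just placeholders):

import time
import torch
import torch.multiprocessing as mp

def producer(q):
    # Periodically put a (timestamp, tensor) tuple on the queue.
    while True:
        q.put((time.time(), torch.randn(64, 64)))  # placeholder tensor
        time.sleep(0.01)

def consumer(q):
    # Grab each tuple as soon as the producer puts it on the queue.
    while True:
        stamp, tensor = q.get()

if __name__ == '__main__':
    q = mp.Queue(1)  # maxsize=1, as described above
    p = mp.Process(target=producer, args=(q,))
    p.start()
    consumer(q)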

PyTorch version: '0.1.11+dfa2d26', with no CUDA, on a stock 8-core 2016 MacBook Pro.

When I use Python’s default multiprocessing, no error occurs.
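In that case the only change from the sketch above is the import (everything else stays the same):

import multiprocessing as mp  # stdlib queue; with this, the error below never shows up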

When I use torch.multiprocessing, this error occurs every ~10 minutes:

Traceback (most recent call last):
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/multiprocessing/queues.py", line 241, in _feed
    obj = ForkingPickler.dumps(obj)
  File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/multiprocessing/reduction.py", line 50, in dumps
    cls(buf, protocol).dump(obj)
  File "/usr/local/lib/python3.5/site-packages/torch/multiprocessing/reductions.py", line 108, in reduce_storage
    metadata = storage.share_filename()
RuntimeError: unable to open shared memory object </torch_77083_582458219> in read-write mode at /private/var/folders/h6/8bw_02_n0vs0h0d3qr7w67m80000gn/T/pip-0_y0f7tx-build/torch/lib/TH/THAllocator.c:226

I’m running into the same issue and am looking forward to any help.
Here is my traceback:

Epoch: [0][41/41] Time 0.256 (0.407) Data 0.000 (0.019) Loss 0.552 (0.565) Prec 11.36% (3.03%)
Traceback (most recent call last):
  File "examples/triplet_loss.py", line 221, in <module>
  File "examples/triplet_loss.py", line 150, in main
  File "build/bdist.linux-x86_64/egg/reid/evaluators.py", line 118, in evaluate
  File "build/bdist.linux-x86_64/egg/reid/evaluators.py", line 21, in extract_features
  File "/usr/local/lib/python2.7/dist-packages/torch/utils_v2/data/dataloader.py", line 301, in __iter__
  File "/usr/local/lib/python2.7/dist-packages/torch/utils_v2/data/dataloader.py", line 163, in __init__
  File "/usr/local/lib/python2.7/dist-packages/torch/utils_v2/data/dataloader.py", line 226, in _put_indices
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put
  File "/usr/local/lib/python2.7/dist-packages/torch/multiprocessing/queue.py", line 17, in send
  File "/usr/lib/python2.7/pickle.py", line 224, in dump
  File "/usr/lib/python2.7/pickle.py", line 286, in save
  File "/usr/lib/python2.7/pickle.py", line 548, in save_tuple
  File "/usr/lib/python2.7/pickle.py", line 286, in save
  File "/usr/lib/python2.7/pickle.py", line 600, in save_list
  File "/usr/lib/python2.7/pickle.py", line 633, in _batch_appends
  File "/usr/lib/python2.7/pickle.py", line 286, in save
  File "/usr/lib/python2.7/pickle.py", line 600, in save_list
  File "/usr/lib/python2.7/pickle.py", line 633, in _batch_appends
  File "/usr/lib/python2.7/pickle.py", line 286, in save
  File "/usr/lib/python2.7/pickle.py", line 562, in save_tuple
  File "/usr/lib/python2.7/pickle.py", line 286, in save
  File "/usr/lib/python2.7/multiprocessing/forking.py", line 67, in dispatcher
  File "/usr/lib/python2.7/pickle.py", line 401, in save_reduce
  File "/usr/lib/python2.7/pickle.py", line 286, in save
  File "/usr/lib/python2.7/pickle.py", line 548, in save_tuple
  File "/usr/lib/python2.7/pickle.py", line 286, in save
  File "/usr/lib/python2.7/multiprocessing/forking.py", line 66, in dispatcher
  File "/usr/local/lib/python2.7/dist-packages/torch/multiprocessing/reductions.py", line 113, in reduce_storage
RuntimeError: unable to open shared memory object </torch_29419_2971992535> in read-write mode at /b/wheel/pytorch-src/torch/lib/TH/THAllocator.c:226
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/util.py", line 274, in _run_finalizers
  File "/usr/lib/python2.7/multiprocessing/util.py", line 207, in __call__
  File "/usr/lib/python2.7/shutil.py", line 239, in rmtree
  File "/usr/lib/python2.7/shutil.py", line 237, in rmtree
OSError: [Errno 24] Too many open files: '/tmp/pymp-QoKm2p'