Control maximum size of shared memory tensor


While trying to debug what seemed to be a deadlock, I realized that I could not share a tensor with sub-processes if it was larger than 128 KB.
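For context on where that threshold comes from: the default tensor dtype is float32 (4 bytes per element), so 32768 elements is exactly 128 KiB. A quick check of the arithmetic:

```python
# 32768 float32 elements = exactly 128 KiB; one more element crosses it
n_ok = 32768
bytes_per_float32 = 4
size_bytes = n_ok * bytes_per_float32
print(size_bytes)           # 131072 bytes
print(size_bytes // 1024)   # 128 KiB
```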

```python
import torch as ch

# Works fine
buffer = ch.zeros(32768).share_memory_()

# Sub-processes hang when they try to read/write the tensor (even when
# protected by a lock); the main process can still read it fine.
buffer = ch.zeros(32769).share_memory_()
```

Is there a configuration option that would let me allocate more shared memory in a single tensor?
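For comparison, the same share-a-buffer-under-a-lock pattern can be written with only the Python standard library. This is just a sketch using `multiprocessing.Array` in place of a tensor; it does not go through PyTorch's shared-memory path, so it does not reproduce the hang even above the 128 KiB threshold:

```python
import multiprocessing as mp

def worker(buf, lock):
    # Guard concurrent access with a lock, as in the tensor version
    with lock:
        buf[0] = 42.0

if __name__ == "__main__":
    lock = mp.Lock()
    # 32769 float32-sized elements -- just over the 128 KiB threshold
    buf = mp.Array("f", 32769, lock=False)
    p = mp.Process(target=worker, args=(buf, lock))
    p.start()
    p.join()
    print(buf[0])  # 42.0 if the child's write landed in shared memory
```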

Thank you for your help

It seems to be a bug, since this worked in version 1.4.0. I filed a bug report here: Operating on more than 128kb of shared memory hangs · Issue #58962 · pytorch/pytorch · GitHub