Hello,
While trying to debug what seemed to be a deadlock, I realized that I could not share a tensor with sub-processes if it was larger than 128 KB (i.e. more than 32768 float32 elements, at 4 bytes each).
import torch as ch

# Works fine
buffer = ch.zeros(32768).share_memory_()

# Sub-processes hang when they try to read/write the tensor (even when
# access is protected by a lock). The main process can read it fine.
buffer = ch.zeros(32769).share_memory_()
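For comparison, here is a sketch of the same pattern using only the standard library's multiprocessing.shared_memory (no torch), which I'd use to check whether the OS shared-memory segment itself is the bottleneck at that size. The 32769 * 4 byte size is assumed from the float32 boundary above; names like child and roundtrip are just illustrative:

```python
from multiprocessing import Process, shared_memory

NBYTES = 32769 * 4  # just past the 128 KB boundary (32768 floats * 4 bytes)

def child(name: str) -> None:
    # Attach to the existing segment from the sub-process and touch both ends.
    shm = shared_memory.SharedMemory(name=name)
    try:
        shm.buf[NBYTES - 1] = shm.buf[0] + 1
    finally:
        shm.close()

def roundtrip() -> int:
    # Create the segment, write a byte, let a sub-process read it and write
    # back past the 128 KB mark, then read the sub-process's write.
    shm = shared_memory.SharedMemory(create=True, size=NBYTES)
    try:
        shm.buf[0] = 1
        p = Process(target=child, args=(shm.name,))
        p.start()
        p.join(timeout=30)
        return shm.buf[NBYTES - 1]
    finally:
        shm.close()
        shm.unlink()

if __name__ == "__main__":
    # A value of 2 means the sub-process read and wrote beyond 128 KB.
    print(roundtrip())
```

If this completes while the torch version hangs, that would point at torch's tensor-sharing path rather than an OS-level shared-memory limit.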
Is there a configuration option that would allow me to allocate more shared memory in a single tensor?
Thank you for your help