How to avoid using shared memory in PyTorch multiprocessing

When I use multiprocessing on a remote server, I get error messages like:

File "/job/.local/lib/python3.7/site-packages/torch/multiprocessing/reductions.py", line 315, in reduce storage
fd, size = storage._share_fd_()
RuntimeError: unable to write to file </torch_1_3660435083>

What works: 1) running it locally with the same number of processes/workers; 2) running it on the server with fewer processes/workers.
This suggests that the server does not have enough shared memory.
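
For reference, this is roughly how I am checking how much shared memory the server exposes (I am assuming the standard /dev/shm mount point; that path does not appear in the error message):

```python
import shutil

# Check the size of the shared-memory mount (assumed to be /dev/shm).
# Values are in bytes; a small tmpfs here would explain the RuntimeError above.
usage = shutil.disk_usage("/dev/shm")
print(f"/dev/shm total: {usage.total / 1e9:.2f} GB, free: {usage.free / 1e9:.2f} GB")
```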

Since I use the ‘spawn’ method to start processes, I do not call model.share_memory() anywhere.
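
For context, here is a minimal sketch of how I start the processes (train_worker and the process count are placeholders for my actual training code):

```python
import torch.multiprocessing as mp

def train_worker(rank, world_size):
    # Each spawned process builds its own model here; I never call
    # model.share_memory(), since 'spawn' pickles arguments to the
    # child processes rather than sharing them.
    pass

if __name__ == "__main__":
    # mp.spawn uses the 'spawn' start method by default.
    mp.spawn(train_worker, args=(4,), nprocs=4)
```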

What might be the problem, and how can I solve it?
Is there anything else I should do to make sure I do not use shared memory?
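
For example, is loading the data in the main process the only reliable way, i.e. something like the following (the dummy dataset is just for illustration)?

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset for illustration; my real dataset is different.
dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))

# With num_workers=0, batches are produced in the main process, so the
# DataLoader does not pass tensors between processes via shared memory.
loader = DataLoader(dataset, batch_size=32, num_workers=0)
```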

(I cannot set the shared memory size on this remote server…)
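
One thing I am unsure about: would switching the tensor sharing strategy be a reasonable workaround when /dev/shm cannot be resized, or does it not help with this particular error? This is what I mean:

```python
import torch.multiprocessing as mp

# 'file_system' identifies shared tensors by file name rather than by
# file descriptor; I am not sure whether it avoids the same size limit.
mp.set_sharing_strategy("file_system")
```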

Thanks very much in advance!