Synchronization for sharing/updating a shared model state dict across multiple processes

Okay, I found there's already a well-documented answer at https://pytorch.org/docs/stable/notes/multiprocessing.html.
And also, in the Hogwild! spirit, it's fine to overwrite gradients from other processes.

So the only thing I wonder now is how updating the shared model parameters can be done without any lock mechanism. Are add_ or addcdiv_ atomic operations?
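
For context, here's a minimal sketch of the pattern I mean, adapted from the linked docs page (the model, data, and hyperparameters are just placeholders): each worker holds its own optimizer over the shared parameters, and optimizer.step() applies in-place updates directly to shared memory without any lock.

```python
import torch
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim


def train(shared_model):
    # Each worker builds its own optimizer over the *shared* parameters;
    # step() then applies in-place updates (add_ / addcdiv_) directly to
    # the shared memory, with no lock (Hogwild! style).
    optimizer = optim.SGD(shared_model.parameters(), lr=0.01)
    for _ in range(100):
        inputs = torch.randn(8, 10)   # dummy batch, placeholder data
        targets = torch.randn(8, 1)
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(shared_model(inputs), targets)
        loss.backward()
        optimizer.step()              # lock-free, possibly racy update


if __name__ == "__main__":
    model = nn.Linear(10, 1)          # toy model, placeholder
    model.share_memory()              # put parameters in shared memory
    processes = []
    for _ in range(4):
        p = mp.Process(target=train, args=(model,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
```

My question is specifically about the optimizer.step() line above: whether the in-place parameter updates it performs are atomic, or whether Hogwild! simply tolerates the races.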
