Is it safe to write to a shared memory tensor from multiple processes?

I’ve searched the Internet and couldn’t find an answer, which is driving me crazy :open_mouth:

In the Hogwild training example here, why don’t we have to synchronize before the optimizer step that updates the shared model parameters? Does PyTorch take care of making that update atomic? If so, where is that specified in the docs?
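
To make the question concrete, here is a minimal sketch of the pattern I mean (my own toy model and data, not the exact example code): each worker builds its own optimizer but steps on the *shared* parameters, with no lock around `optimizer.step()`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.multiprocessing as mp


def train(rank, model):
    # Each process has its own optimizer, but the parameters it updates
    # live in shared memory and are written to by every worker.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(100):
        x = torch.randn(8, 10)
        y = torch.randn(8, 1)
        optimizer.zero_grad()
        loss = F.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()  # <- no synchronization here; is this safe?


if __name__ == "__main__":
    model = nn.Linear(10, 1)
    model.share_memory()  # put the model's parameters in shared memory
    workers = [mp.Process(target=train, args=(rank, model)) for rank in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
```

Is the concurrent, unlocked write in `optimizer.step()` above guaranteed to be safe (no torn/corrupted tensors), or merely "lock-free and usually fine" in the Hogwild sense?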

Plz help :pleading_face: