Could a model shared across processes become inconsistent during updates?

A model is usually composed of many parameter tensors. When the model is updated, e.g. by an optimizer step, the update is unlikely to be atomic across all of those tensors. For such a non-atomic operation on memory shared among many processes, I worry about the consistency of the model state during the update: at a given moment, a process might read a partially updated model, where some parameters come from the new step while the rest are still from the last iteration.
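
For concreteness, here is a minimal sketch of the kind of setup I have in mind: a Hogwild-style loop where several workers update a shared toy `nn.Linear` model in place without any locking. The model, data, learning rate, and iteration count are just placeholders for illustration.

```python
import torch
import torch.nn as nn
import torch.multiprocessing as mp


def train(model):
    # Each worker builds its own optimizer over the *shared* parameters
    # and updates them in place, with no lock (Hogwild-style).
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(100):
        x = torch.randn(32, 10)
        y = torch.randn(32, 1)
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        # The step writes the parameter tensors one by one,
        # so another process could observe a mix of old and new values.
        opt.step()


if __name__ == "__main__":
    model = nn.Linear(10, 1)
    model.share_memory()  # place parameters in shared memory
    procs = [mp.Process(target=train, args=(model,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```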

Does PyTorch take any precautions for such a scenario? Or is this kind of state inconsistency actually expected in practice and simply a non-issue?