Hi all, I am trying to implement vanilla Hogwild! using PyTorch and found an example in the PyTorch examples repository. What I am wondering is: after calling share_memory(), are reads and writes to the shared parameters lock-free? I ask because vanilla Hogwild! is a lock-free asynchronous weight-update method.
import torch
import torch.multiprocessing as mp

torch.manual_seed(args.seed)
model = Net()
model.share_memory()  # gradients are allocated lazily, so they are not shared here

processes = []
for rank in range(args.num_processes):
    p = mp.Process(target=train, args=(rank, args, model))
    p.start()
    processes.append(p)
for p in processes:
    p.join()
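For what it's worth, here is a minimal sketch I used to convince myself of what share_memory_() does for a plain tensor: it moves the tensor's storage into shared memory so that child processes write to the same buffer directly, with no lock involved (the function names `worker` and the tensor `t` are just illustrative, not from the example):

```python
import torch
import torch.multiprocessing as mp

def worker(t):
    # Each process writes to the shared storage directly, with no lock,
    # in the spirit of Hogwild!: interleavings are unsynchronized.
    t += 1.0

if __name__ == "__main__":
    t = torch.zeros(4)
    t.share_memory_()          # move storage into shared memory
    assert t.is_shared()       # confirms the storage is shared

    procs = [mp.Process(target=worker, args=(t,)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    # Updates made by the children are visible in the parent,
    # since all processes alias the same underlying buffer.
    print(t)
```

So my current understanding is that share_memory() only makes the storage visible to all processes; it does not add any synchronization on top, which seems to be exactly what Hogwild! wants. Is that correct?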
@Soumith_Chintala Could you please provide some info about this?