Is a share_memory() model lock-free?

Hi all, I am trying to implement vanilla Hogwild using PyTorch and found an example in the PyTorch examples. But I am wondering: is share_memory() lock-free for reads and writes?
I ask because vanilla Hogwild is a lock-free asynchronous weight-update method.

    import torch
    import torch.multiprocessing as mp

    torch.manual_seed(args.seed)

    model = Net()
    model.share_memory()  # gradients are allocated lazily, so they are not shared here

    processes = []
    for rank in range(args.num_processes):
        # every worker receives the same shared model and updates it in place
        p = mp.Process(target=train, args=(rank, args, model))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
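
For context, each worker in that example runs a loop roughly like the sketch below: it builds its own optimizer but steps directly on the shared parameters, with no lock around the update. The optimizer and loss here are plausible choices rather than necessarily the exact ones from the example, and get_data_loader is just a placeholder for however each worker gets its data.

    import torch.optim as optim
    import torch.nn.functional as F

    def train(rank, args, model):
        # each process creates its own optimizer, but the parameters it
        # updates live in shared memory, so steps from different workers
        # interleave without any explicit locking (the Hogwild idea)
        optimizer = optim.SGD(model.parameters(), lr=args.lr)
        for data, target in get_data_loader(rank, args):  # placeholder loader
            optimizer.zero_grad()
            loss = F.nll_loss(model(data), target)
            loss.backward()   # gradients stay local to this process
            optimizer.step()  # writes go straight into the shared parameter tensors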

@Soumith_Chintala Could you please provide some info about this?


Hello, I have the same problem right now; I don't understand how share_memory() is supposed to work.
So far I have split my dataset and given each part to a different process, all using the same model. At the end I want to call the optimizer once (assuming every process has already run backpropagation on its own data), so that the weights are updated with respect to the sum of the gradients (each process calling backward would add its gradient to the accumulated one).