How does share_memory() work for multiprocessing?

Hi, I was exploring model parameter sharing with multiprocessing, but the code below suggests that the model parameters are not actually being shared.

Here I am running 2 parallel processes with a lock so that nothing overlaps. Since the memory is shared, I expected process 2 to see the gradient accumulated by process 1.

However, in both processes the first print(model.l2.weight.grad) (the one before loss.backward()) prints None instead of the gradient accumulated by the other process.
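
For reference, the pieces not shown in the snippet (the model, the input/target tensors, the loss function) are created roughly as below; the exact sizes, loss and optimizer are only placeholders:

import torch
import torch.multiprocessing as mp

# Placeholder setup; the real sizes, data and loss may differ.
N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

class MyModel(torch.nn.Module):
    def __init__(self, N, D_in, H, D_out):
        super().__init__()
        self.l1 = torch.nn.Linear(D_in, H)
        self.l2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        return self.l2(torch.relu(self.l1(x)))

loss_fn = torch.nn.MSELoss()
# optimizer (e.g. torch.optim.SGD(m.parameters(), lr=1e-4)) is created
# once the model instance m exists, further down.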

def run(model, l):
    for t in range(1):
        # Forward pass: compute predicted y by passing x to the model.
        l.acquire()
        y_pred = model(x)

        # Compute the loss.
        loss = loss_fn(y_pred, y)
        # optimizer.zero_grad()  # commented out so the existing gradient is not zeroed
        print("Before backward...")
        print(model.l2.weight.grad)  # expected: the gradient left by the other process
        loss.backward()
        optimizer.step()
        print("After backward...")
        print(model.l2.weight.grad)
        l.release()

m = MyModel(N, D_in, H, D_out)
m.share_memory()  # move the model's parameters into shared memory
l = mp.Lock()
processArr = []
for i in range(2):
    p = mp.Process(target=run, args=(m, l))
    processArr.append(p)
    p.start()

for p in processArr:
    p.join()
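
For comparison, here is a minimal standalone sketch of what I understand share_memory to do for a plain tensor (the helper add_one and the tensor t are just illustrative names): an in-place update made in the child process should be visible in the parent.

import torch
import torch.multiprocessing as mp

def add_one(t):
    t.add_(1.0)  # in-place update on the shared storage

if __name__ == "__main__":
    t = torch.zeros(3)
    t.share_memory_()  # move the tensor's storage into shared memory
    p = mp.Process(target=add_one, args=(t,))
    p.start()
    p.join()
    print(t)  # expected: tensor([1., 1., 1.])

The model's parameter data seems to be shared in the same way, but the .grad attribute does not carry over between the processes, which is the part I am confused about.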
