Problem with Variable.grad.data?

Thank you~ Regarding the safety issue, I noticed that DM's paper explicitly says they don't put a lock on the shared weights.

Regarding your solution for sharing gradients in pytorch-a3c:

def ensure_shared_grads(model, shared_model):
    for param, shared_param in zip(model.parameters(), shared_model.parameters()):
        # if the shared grads have already been bound, do nothing
        if shared_param.grad is not None:
            return
        # point the shared param's grad at the local param's grad tensor
        shared_param._grad = param.grad
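
For context, this is how I understand it being called in each worker's update step (just a rough sketch with placeholder names, not the exact pytorch-a3c code):

# Rough sketch of one worker's update step (placeholder names, not the exact repo code)
def train_step(model, shared_model, optimizer, loss):
    model.zero_grad()                         # clear the local model's gradients
    loss.backward()                           # gradients accumulate in the local params
    ensure_shared_grads(model, shared_model)  # share the local grads with the shared model
    optimizer.step()                          # optimizer built on shared_model applies the update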

Will this code restrict shared_model's grads to being bound to only one local_model?
Because shared_model's grad will no longer be None after this function runs once, won't other threads' local models be unable to rebind _grad anymore? Or will _grad simply not be accessible to the other threads?
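
To make my concern concrete, here is a toy single-process sketch (real pytorch-a3c uses multiprocessing, so this may not capture the actual cross-process behaviour, which is exactly what I'm unsure about):

import torch
import torch.nn as nn

def ensure_shared_grads(model, shared_model):
    for param, shared_param in zip(model.parameters(), shared_model.parameters()):
        if shared_param.grad is not None:
            return
        shared_param._grad = param.grad

shared = nn.Linear(2, 2)    # stands in for the shared model
worker1 = nn.Linear(2, 2)   # stands in for worker 1's local model
worker2 = nn.Linear(2, 2)   # stands in for worker 2's local model

worker1(torch.randn(1, 2)).sum().backward()
ensure_shared_grads(worker1, shared)    # first call: binds shared grads to worker1's grad tensors

worker2(torch.randn(1, 2)).sum().backward()
ensure_shared_grads(worker2, shared)    # second call: returns early because shared grad is not None

print(shared.weight.grad is worker1.weight.grad)   # True  -> still aliases worker1's grads
print(shared.weight.grad is worker2.weight.grad)   # False -> worker2's grads never get bound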
