Locking a parameter

I have a big giant tensor shared between two processes/streams.
I want to create a simple lock,
one that works for both of them across (forward(), backward(), step()).
For forward and step I have a simple solution: overriding the torch.nn.Parameter class.
The problem is with backward/autograd:
we have a post-backward hook but no pre-backward hook.

Is there any way to do what I want in PyTorch?

Hi,

If you only read it, you will be fine.
If you have to write to it, you should make sure no reads happen at the same time, but that should be much simpler to enforce.

For forward and step I have a simple solution: overriding the torch.nn.Parameter class.

I am curious, how do you do that?

I mean a readers-writers lock. So to disallow reads I have to lock something whenever a read starts.

It's optimal to do it just when we are about to read the parameter during the backward pass
(which is why I mentioned this “pre-backward hook”).
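For concreteness, by readers-writers lock I mean something like this (a standard Condition-based sketch, nothing PyTorch-specific; Python's threading module has no built-in RW lock, and this simple version has no writer preference):

```python
import threading

class RWLock:
    """Minimal readers-writers lock: many concurrent readers, one exclusive writer."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def acquire_read(self):
        with self._cond:
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        # hold the condition's lock for the whole write; new readers block on it
        self._cond.acquire()
        while self._readers > 0:
            self._cond.wait()

    def release_write(self):
        self._cond.release()
```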

Locking at the beginning of autograd.backward() is the easy solution, but it's wasteful, right?
I can split the backward into 2 calls if the model is sequential; is that what you meant by “easy to enforce”?
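Roughly like this (a rough, self-contained sketch with made-up names; pretend the shared parameter lives in part1):

```python
import threading
import torch
import torch.nn as nn

lock = threading.Lock()              # placeholder for the real lock
part1 = nn.Linear(10, 10)            # pretend the shared parameter lives here
part2 = nn.Linear(10, 1)             # the rest of the sequential model
x, y = torch.randn(4, 10), torch.randn(4, 1)

h = part1(x)
h_det = h.detach().requires_grad_()  # cut the graph between the two parts
loss = nn.functional.mse_loss(part2(h_det), y)

loss.backward()                      # backward call #1: only goes through part2, no lock needed
with lock:                           # backward call #2: the only one that reads the shared parameter
    h.backward(h_det.grad)           # feed the upstream gradient into part1's graph
```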

I actually thought of wrapping the torch.nn.Parameter class with composition, and automating it somehow (e.g. with __setattr__/__getattr__, or maybe there is a more elegant Pythonic solution on Stack Overflow :slight_smile: ).
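Something along these lines (a rough, untested sketch; LockedParameter and the lock are my own names):

```python
import threading
import torch.nn as nn

class LockedParameter:
    """Wraps an nn.Parameter and guards Python-level attribute access with a lock."""
    def __init__(self, param: nn.Parameter, lock: threading.Lock):
        # bypass our own __setattr__ while storing the wrapped objects
        object.__setattr__(self, "_param", param)
        object.__setattr__(self, "_lock", lock)

    def __getattr__(self, name):
        # called for anything not found on the wrapper itself (.data, .grad, ...)
        with object.__getattribute__(self, "_lock"):
            return getattr(object.__getattribute__(self, "_param"), name)

    def __setattr__(self, name, value):
        with object.__getattribute__(self, "_lock"):
            setattr(object.__getattribute__(self, "_param"), name, value)
```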

Most of the autograd logic (in particular, the .grad field update) happens in C++, and so is out of the scope of nn.Parameter() :confused: These __getattr__/__setattr__ overrides won't be called when autograd updates the field from C++ :confused:

If the only part that writes is the end of the backward, a solution would be to use autograd.grad and do the copy into the .grad field manually (with proper locking).
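Something along these lines, as a minimal sketch (layer, lock, and the loss are placeholders I made up; across processes you would substitute whatever shared lock you already use for threading.Lock):

```python
import threading
import torch
import torch.nn as nn

lock = threading.Lock()                   # placeholder for the real (possibly cross-process) lock
layer = nn.Linear(10, 1)                  # pretend layer.weight is the shared parameter
param = layer.weight

x = torch.randn(4, 10)
loss = layer(x).sum()

# autograd.grad computes the gradient but does NOT write into param.grad
(grad_param,) = torch.autograd.grad(loss, [param])

with lock:                                # the only write to .grad, done under the lock
    if param.grad is None:
        param.grad = grad_param.detach().clone()
    else:
        param.grad.add_(grad_param)       # mimic the usual accumulation
```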

But I don’t think there is any way to do general locking on a Tensor. You will have to do it for each use one by one.