Sharing optimizer between processes

I am wondering if it is possible to share an optimizer between different processes. To be specific, when optimizer.step() is called, the modified state of the optimizer should be visible to all processes.

I can’t think of a way to do this, because the optimizer also keeps Python scalars in its state.

What are you trying to do?

I am trying to benchmark the A3C algorithm with a shared RMSProp optimizer vs. a separate one per thread, as described in this paper (p. 11).

No, I think most of the optimizer state is kept in Tensors, so it should be possible to share it. I’m not 100% sure right now; I’ll need to take a look at RMSprop and can confirm it tomorrow.

Thanks. I will also look at the code and see if I can find some hack.

Glad to know that you were looking into this.
Is the current optimizer thread-safe?
Thanks!

It’s not. You need to guard it with a mutex yourself.
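A minimal sketch of that guarding pattern with torch.multiprocessing is below. The toy model, data, and hyperparameters are made up for illustration, and it assumes a fork-style start method so the workers inherit the model and optimizer; only the parameters are placed in shared memory here, while sharing the optimizer state itself is what the later posts in this thread address.

```python
import torch
import torch.nn as nn
import torch.multiprocessing as mp


def worker(model, optimizer, lock, steps):
    # Each worker computes gradients on its own (fake) data, but the
    # parameter update itself is serialized with a multiprocessing lock.
    for _ in range(steps):
        x = torch.randn(8, 4)
        y = torch.randn(8, 1)
        loss = nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        with lock:  # only one process runs optimizer.step() at a time
            optimizer.step()


if __name__ == "__main__":
    model = nn.Linear(4, 1)
    model.share_memory()  # put the parameters into shared memory
    optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
    lock = mp.Lock()

    procs = [mp.Process(target=worker, args=(model, optimizer, lock, 100))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```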

Just a quick follow-up for those wanting to implement A3C.

A3C uses Hogwild!-style updates, which means they are applied lock-free. Workers may occasionally overwrite each other’s updates, but that’s OK.

The only thing needed for Adam or RMSprop to be shared with other processes is to subclass them and call share_memory_() on the important state tensors. You also need to work around the step counter, which is a plain Python number rather than a tensor, so it can’t be shared directly; storing it as a tensor is one way around that (see the sketch below).

But other than that, it is doable, and not too hard.
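For reference, here is a minimal sketch of that idea for RMSprop, along the lines of publicly available A3C implementations. The class name SharedRMSprop and the eager state initialization are illustrative, not an official API; it assumes the default momentum=0 and centered=False (so square_avg is the only per-parameter buffer), and the exact state keys can differ between PyTorch versions, so compare against your version’s torch.optim.RMSprop before relying on it.

```python
import torch
import torch.optim as optim


class SharedRMSprop(optim.RMSprop):
    """RMSprop whose per-parameter state lives in shared memory.

    Construct it in the parent process and call share_memory() once
    before starting the workers, so every process sees the same state.
    """

    def __init__(self, params, lr=7e-4, alpha=0.99, eps=1e-5):
        super().__init__(params, lr=lr, alpha=alpha, eps=eps)

        # RMSprop normally creates its state lazily on the first step().
        # Create it eagerly here so it exists before workers start, and
        # keep the step counter as a tensor instead of a Python int so
        # that it, too, can be placed in shared memory.
        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                state['step'] = torch.zeros(1)
                state['square_avg'] = torch.zeros_like(p.data)

    def share_memory(self):
        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                state['step'].share_memory_()
                state['square_avg'].share_memory_()
```

Each worker then receives the shared model and this shared optimizer and calls optimizer.step() after its local backward() pass, lock-free in the Hogwild! style described above.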
