How to share torch models across python processes

I want to initialize a torch model once in the main process and then pass it to a child process (the model is large, so initialization is slow), but the IPC overhead of passing it is too large.
I tried calling model.share_memory(), but it seems to have no effect.
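For reference, here is a minimal sketch of the share_memory() approach, with a small nn.Linear standing in for the large model and torch.multiprocessing used to spawn the child:

```python
import torch
import torch.multiprocessing as mp


def worker(model, x, queue):
    # The child process runs inference on the model it received;
    # parameters in shared memory should not be copied again here.
    with torch.no_grad():
        queue.put(model(x))


if __name__ == "__main__":
    model = torch.nn.Linear(4, 2)  # stand-in for the large model
    model.share_memory()           # move parameter storage into shared memory

    # All parameters now report that their storage is shared.
    assert all(p.is_shared() for p in model.parameters())

    ctx = mp.get_context("spawn")
    queue = ctx.Queue()
    p = ctx.Process(target=worker, args=(model, torch.ones(1, 4), queue))
    p.start()
    out = queue.get()
    p.join()
    print(out.shape)  # torch.Size([1, 2])
```

Even with shared parameter storage, the module object itself is still pickled when handed to the child, so some per-process setup cost remains.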
My question is: is there any way to overcome this? The final objective is to use the model in a child process without paying the long model-initialization cost again.