Sharing the torch::jit::script::Module with other processes

I am looking for a way for multiple processes to share a single module on the GPU, e.g. one process transfers a module to the GPU, and subsequent processes that come online can use that same module for inference without uploading their own copy.

Is there anything in the PyTorch C++ API similar to the PyTorch Hogwild example, where a model is shared across multiple processes using model.share_memory()?
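
For reference, the Python pattern I mean looks roughly like this (a minimal sketch of the Hogwild-style setup; `Net`, `train`, and the hyperparameters are placeholders, not taken from the official example):

```python
import torch
import torch.nn as nn
import torch.multiprocessing as mp


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)


def train(model):
    # Every worker sees the same underlying parameter storage,
    # so in-place optimizer updates are visible to all processes.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(100):
        x = torch.randn(32, 10)
        y = torch.randint(0, 2, (32,))
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    model = Net()
    model.share_memory()  # move parameter storage into shared memory
    workers = [mp.Process(target=train, args=(model,)) for _ in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
```

I'd like to achieve the same kind of sharing, but for a torch::jit::script::Module loaded in C++.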

This sounds like a feature worth discussing, and I'd suggest opening a feature request at https://github.com/pytorch/pytorch/issues.