I am looking for a way for multiple processes to share a single module on the GPU. E.g., one process transfers a module to the GPU, and subsequent processes that come online can run inference with that same module, without uploading their own copy.
Is there anything in the PyTorch C++ API similar to the PyTorch Hogwild example, where a model is shared across multiple processes using `model.share_memory()`?
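For context, the Python pattern being referred to looks roughly like this (a minimal sketch, not taken verbatim from the Hogwild example; note that `share_memory()` places CPU parameters in shared memory, whereas cross-process sharing of CUDA tensors goes through CUDA IPC and behaves differently):

```python
import torch
import torch.nn as nn
import torch.multiprocessing as mp

def worker(model, x, out_q):
    # The child process reuses the parent's parameter storage
    # (shared memory); no copy of the weights is made.
    with torch.no_grad():
        out_q.put(model(x).sum().item())

if __name__ == "__main__":
    model = nn.Linear(4, 2)
    model.share_memory()  # move CPU parameters/buffers into shared memory
    assert all(p.is_shared() for p in model.parameters())

    q = mp.SimpleQueue()
    p = mp.Process(target=worker, args=(model, torch.ones(1, 4), q))
    p.start()
    p.join()
    print(q.get())  # inference result computed against the shared weights
```

The question is whether an equivalent mechanism exists in libtorch for a module already resident on the GPU.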