GPU model on shared memory?

Is there a simple way to share a model on the GPU between processes?

I read https://pytorch.org/docs/stable/multiprocessing.html and https://pytorch.org/docs/stable/notes/multiprocessing.html, but they only discuss CUDA shared memory at the level of individual tensors (not modules), and sharing for CPU models.

Is there a simple method like model.share_memory() for a GPU model?

Hi,

Only the tensors (parameters and buffers) are actually stored on the GPU; the rest of your model's structure lives on the CPU. So nothing extra will be copied on the GPU if you share the model.