Is there a simple way to share a model on the GPU between processes?
I read https://pytorch.org/docs/stable/multiprocessing.html and https://pytorch.org/docs/stable/notes/multiprocessing.html, but they only discuss sharing CUDA memory at the level of individual tensors (not whole modules) and sharing CPU models.
Is there a simple method like model.share_memory() for a model on the GPU?
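For reference, here is a minimal sketch of the CPU pattern I mean (the module and sizes are just placeholders); I'm asking whether an equivalent one-liner exists once the model has been moved to the GPU:

```python
import torch
import torch.nn as nn

# CPU case: share_memory() moves the module's parameters into
# shared memory so child processes can see the same storage.
model = nn.Linear(4, 4)  # stays on the CPU
model.share_memory()

# The parameter tensors now report that they live in shared memory.
print(next(model.parameters()).is_shared())  # True
```

What I'm hoping for is the same call pattern after `model.cuda()`, rather than having to share each CUDA tensor individually.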