I am trying to implement multiprocessing environments for parallel rollouts.
I call model.share_memory() and then use torch.multiprocessing to start several processes.
In each process, I call model(input) to run the forward pass.
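For reference, here is a minimal sketch of what I mean (the nn.Linear policy, the input shape, and the random observation are just placeholders for my actual model and environment loop):

```python
import torch
import torch.nn as nn
import torch.multiprocessing as mp

def rollout_worker(model, rank):
    # Each worker process runs its own forward passes on the shared model.
    with torch.no_grad():
        x = torch.randn(1, 10)  # placeholder for an environment observation
        out = model(x)
        print(f"worker {rank}: output sum = {out.sum().item():.4f}")

if __name__ == "__main__":
    model = nn.Linear(10, 4)   # placeholder for my policy network
    model.share_memory()       # move parameters into shared memory
    mp.set_start_method("spawn", force=True)

    procs = []
    for rank in range(4):
        p = mp.Process(target=rollout_worker, args=(model, rank))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()
```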
My question is:
Do the forward passes in all the processes actually run in parallel (like parallel GPU computation)? Or is the model only used by one process at a time, with all the processes sharing it via some lock?