I am still a little confused. Let’s say I have 2 GPUs and would like to run 2 processes on each GPU. All 4 of the processes are independent.
May I ask some questions please:
- There is no need to use `share_memory()`, right?
- I just need to create 2 copies of the model per GPU and run them with PyTorch multiprocessing. Is that correct?
- Is there anything else I need to do?
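To make my question concrete, here is a minimal sketch of what I have in mind (`MyModel` and `run_worker` are just placeholders I made up, not code from any real project):

```python
import torch
import torch.multiprocessing as mp
import torch.nn as nn


class MyModel(nn.Module):          # placeholder model
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 1)

    def forward(self, x):
        return self.fc(x)


def run_worker(rank, gpu_id):
    # Each of the 4 processes builds its own model copy on its assigned GPU
    # and works independently -- no communication, no shared memory.
    device = torch.device(f"cuda:{gpu_id}")
    model = MyModel().to(device)
    x = torch.randn(32, 10, device=device)
    out = model(x)
    print(f"process {rank} on {device}: output mean {out.mean().item():.4f}")


if __name__ == "__main__":
    # "spawn" is required when child processes use CUDA
    mp.set_start_method("spawn", force=True)
    # 2 GPUs x 2 processes per GPU = 4 independent processes
    gpu_assignment = [0, 0, 1, 1]
    procs = []
    for rank, gpu_id in enumerate(gpu_assignment):
        p = mp.Process(target=run_worker, args=(rank, gpu_id))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()
```

Is this roughly the right approach, or am I missing something?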