Hey, I am considering running my model on multiple GPUs in one machine, and I found these two data-parallel tutorials in the PyTorch documentation:
- https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html — seems easier, but somewhat vague.
- https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html — seems more specific.
What’s the difference between these two pipelines, and which one should I choose?
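For context, this is the basic pattern I understand the first (blitz) tutorial to be showing — just a minimal sketch, with `nn.Linear` standing in for my actual model:

```python
import torch
import torch.nn as nn

# Stand-in for a real model (my GAN's generator/discriminator would go here).
model = nn.Linear(10, 5)

# nn.DataParallel splits each input batch across the visible GPUs and
# gathers the outputs back on the default device.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

x = torch.randn(32, 10, device=device)  # the batch dim gets scattered across GPUs
out = model(x)
print(out.shape)  # torch.Size([32, 5])
```

Is the second tutorial essentially the same mechanism, just applied to individual submodules instead of the whole model?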
(BTW, I am training a Generative Adversarial Network. Are there any good public multi-GPU codebases I can refer to?)
Thanks in advance~