Hey, I am considering running my model on multiple GPUs in one machine, and I found these two DataParallel methods in the PyTorch documentation.
The first seems easier but somewhat vague; the second seems more specific.
What’s the difference between these two pipelines? Which should I choose?
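For context, this is roughly the wrapper pattern I have in mind, just a minimal sketch (the model, shapes, and device handling here are placeholders, not my actual training code):

```python
import torch
import torch.nn as nn

# Toy model standing in for my network (placeholder, not the real GAN).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

# Wrap with nn.DataParallel when more than one GPU is visible;
# it replicates the model and scatters each input batch across devices.
# With no GPUs it just runs the plain module on CPU.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

x = torch.randn(8, 10).to(device)  # batch of 8 gets split across GPUs
out = model(x)
print(out.shape)  # torch.Size([8, 1])
```

Is this the right general shape, or should I be structuring things differently for the distributed approach?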
(BTW, I am training a Generative Adversarial Network. Is there any good public multi-GPU code I can refer to?)
Thanks in advance~