Is Distributed Data Parallel equivalent to "Defence Against the Dark Arts" from Harry Potter?

Hi Everyone,
So I am trying to run DistributedDataParallel on a single machine with multiple GPUs. I have consulted various tutorials from multiple resources and am still facing issues.
I have asked here multiple times at various intervals, but no one seems to reply. Is it some prohibited technology?
Last time, I asked this: Use Distributed Data Parallel correctly, and I have asked several times from other accounts as well.
I am a beginner, so if anyone could at least point me in the general direction, it would help me a lot.
Thanks.
Edit: I have followed the links below -
https://pytorch.org/docs/master/notes/ddp.html#example
https://pytorch.org/docs/stable/distributed.html
https://github.com/yangkky/distributed_tutorial/blob/master/src/mnist-mixed.py
https://pytorch.org/tutorials/intermediate/ddp_tutorial.html
https://yangkky.github.io/2019/07/08/distributed-pytorch-tutorial.html
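For reference, here is a minimal sketch of the kind of single-machine, multi-GPU setup I am trying to get working, put together from the DDP tutorial linked above. The toy linear model, the port number, and the NCCL backend are just placeholder assumptions, not my actual training code:

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP


def setup(rank, world_size):
    # Rendezvous settings for a single machine; any free port works here.
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    # NCCL is the usual backend for multi-GPU training on one node.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)


def cleanup():
    dist.destroy_process_group()


def train(rank, world_size):
    setup(rank, world_size)

    # Each process owns exactly one GPU, selected by its rank.
    torch.cuda.set_device(rank)
    model = nn.Linear(10, 1).to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)

    # One toy step with random data; a real run would use a DataLoader
    # wrapped with DistributedSampler so each rank sees a distinct shard.
    optimizer.zero_grad()
    outputs = ddp_model(torch.randn(20, 10).to(rank))
    labels = torch.randn(20, 1).to(rank)
    loss_fn(outputs, labels).backward()
    optimizer.step()

    cleanup()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    # Spawn one process per GPU; mp.spawn passes the process index as the first argument.
    mp.spawn(train, args=(world_size,), nprocs=world_size, join=True)
```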

Hey @bing, thanks for the question, and sorry about the delay. Could you please add the "distributed" tag to DP/DDP/RPC/C10D-related questions in the future? The PT Distributed team is actively monitoring that tag, but we might miss uncategorized questions.


I added my response to your previous question, Use Distributed Data Parallel correctly; let's discuss there.