Hi
Does parallelism in PyTorch work across multiple CPUs (distributed-memory processing across CPU nodes)? I ask because I don't have any GPUs.
Thank you
PyTorch uses Intel MKL, which attempts to utilize the CPU to its full capacity.
A GPU is still 10-30x faster than a CPU, so you may want to get one if you are planning to do this long term.
In our next major release, we will support distributed parallelization, which will enable you to parallelize over CPUs across nodes as well.
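For the single-node CPU parallelism that MKL already provides, PyTorch exposes the intra-op thread pool through `torch.get_num_threads` / `torch.set_num_threads`. A minimal sketch (the thread count of 2 is just an illustrative value):

```python
import torch

# Query the number of intra-op worker threads (backed by MKL/OpenMP).
print(torch.get_num_threads())

# Cap the thread pool, e.g. to avoid oversubscription on a shared node.
torch.set_num_threads(2)

# Large ops such as this matmul are parallelized across those threads
# automatically; no code changes are needed.
x = torch.rand(1024, 1024)
y = x @ x
```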
Hello,
this is a very useful thread. I was wondering whether you’re roughly able to say when the release supporting distributed parallelization may be expected as I’m interested in using it? Or has it already been released and I missed it?
Any news on this one? Thanks
Hi,
Is there any update on this in the newest version, 1.0?
Thanks!
We have had torch.distributed since August 2017.
See:
https://pytorch.org/docs/stable/distributed.html
https://pytorch.org/docs/stable/nn.html#distributeddataparallel
https://pytorch.org/docs/stable/nn.html#distributeddataparallelcpu
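To tie the links above together, here is a minimal CPU-only sketch using the `gloo` backend (the backend intended for CPU tensors; `nccl` requires GPUs). The address, port, and model are placeholders, and on current PyTorch a plain `DistributedDataParallel` wraps a CPU module directly:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def run(rank: int, world_size: int) -> None:
    # Rendezvous settings; on a real cluster MASTER_ADDR would be the
    # hostname of the rank-0 node (placeholder values here).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # DDP all-reduces gradients across ranks during backward().
    model = DDP(nn.Linear(10, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(3):
        loss = model(torch.rand(8, 10)).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

    dist.destroy_process_group()
```

Each node (or process) would invoke `run` with its own rank, e.g. via `torch.multiprocessing.spawn` on one machine or a launcher script across machines.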