Does Pytorch support parallelism across multiple CPUs?

Hi
I want to know whether parallelism in PyTorch works across multiple CPUs (distributed-memory processing across nodes), because I don’t have GPUs.

Thank you

7 Likes

PyTorch uses Intel MKL, which applies optimizations to utilize the CPU to its full capacity.
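To make the distinction concrete: MKL/OpenMP gives *intra-op* parallelism on a single machine, not parallelism across nodes. A minimal sketch (assuming a PyTorch build with MKL/OpenMP threading, which is the default):

```python
# Sketch: CPU intra-op parallelism in PyTorch. torch.set_num_threads
# controls the MKL/OpenMP thread pool used to split a single op
# (e.g. a matmul) across the cores of ONE machine.
import torch

torch.set_num_threads(2)       # cap the thread pool at 2 threads
n = torch.get_num_threads()
print(n)  # 2

# A matmul of this size is divided across the configured threads by MKL.
a = torch.randn(512, 512)
b = torch.randn(512, 512)
c = a @ b
print(c.shape)  # torch.Size([512, 512])
```

This only scales within one box; spreading work across several CPU nodes needs `torch.distributed`, discussed below in the thread.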

A GPU is still 10-30x faster than a CPU, so you may want to get one if you are planning to do this long term.

1 Like

In our next major release, we will support distributed parallelization, which will enable you to parallelize over CPUs across nodes as well.

8 Likes

Hello,
this is a very useful thread. Are you roughly able to say when the release supporting distributed parallelization can be expected? I’m interested in using it 🙂 Or has it already been released and I missed it?

1 Like

Any news on this one? Thanks

1 Like

Hi,
Is there any new update on this in the newest version 1.0 ?
Thanks!

We have had `torch.distributed` since August 2017.

See:

https://pytorch.org/docs/stable/distributed.html
https://pytorch.org/docs/stable/nn.html#distributeddataparallel
https://pytorch.org/docs/stable/nn.html#distributeddataparallelcpu
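A minimal sketch of the CPU path through those docs, using the `gloo` backend (the CPU-friendly one; `nccl` requires GPUs). This runs as a single process for illustration; a real multi-node job would launch one process per node with matching `MASTER_ADDR`/`MASTER_PORT` and distinct ranks:

```python
# Sketch (assumes PyTorch >= 1.0 with the "gloo" backend compiled in):
# set up a one-process "distributed" group on CPU and wrap a toy model
# in DistributedDataParallel, which averages gradients across ranks.
import os
import torch
import torch.distributed as dist
import torch.nn as nn

# Rendezvous info; with world_size=1 this process is the whole group.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = nn.Linear(10, 1)  # plain CPU module; no device_ids for CPU use
ddp_model = nn.parallel.DistributedDataParallel(model)

out = ddp_model(torch.randn(4, 10))
print(out.shape)  # torch.Size([4, 1])

dist.destroy_process_group()
```

In a multi-node run you would pass the real `rank` and `world_size` per process and point `MASTER_ADDR` at one reachable node; gradients are then all-reduced over TCP by gloo during `backward()`.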

1 Like