Parallelism via CPU

Hi,
I know that it is not the intended use case for PyTorch, but I am wondering if it can use multiple CPUs if available (and if so, whether it does so automatically). Say I have a cluster with 128 CPUs; would there be an equivalent to, e.g., TensorFlow's

import tensorflow as tf

sess = tf.Session(config=tf.ConfigProto(
    intra_op_parallelism_threads=NUM_THREADS))

Hi,

Distributed training is going to be supported in the next release of PyTorch, roughly 90 days from now.
You can follow the progress of the distributed implementation here.
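
In the meantime, for the intra-op threading part of your question, a minimal sketch of the closest PyTorch analogue (assuming torch.set_num_threads is the knob you want; it caps the threads used inside a single CPU op, much like intra_op_parallelism_threads):

import torch

# Cap the number of threads PyTorch may use for intra-op
# parallelism, e.g. inside a single large matrix multiply.
torch.set_num_threads(128)

# CPU-bound ops like this matmul can now use up to 128 threads.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)
c = a.mm(b)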


Sounds awesome! Thanks!