I need to parallelize the training of an ANN using n cores of a CPU, not a GPU. Is it possible to achieve this in PyTorch? All the parallelization examples I have seen here use GPUs…
You could use torch.set_num_threads(int) to define the number of threads used for intra-op parallelism and torch.set_num_interop_threads(int) for inter-op parallelism (e.g. in the JIT interpreter) on the CPU. The environment variables OMP_NUM_THREADS and MKL_NUM_THREADS might also be useful. More information is given in the CPU threading docs.
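A minimal sketch of how these calls could be used, assuming you want to cap training at 4 CPU cores (the thread counts here are just placeholders for your own core budget):

```python
import torch

# Inter-op parallelism (e.g. the JIT interpreter running independent ops).
# This must be called before any inter-op parallel work has started,
# so it is safest to set it right after importing torch.
torch.set_num_interop_threads(4)

# Intra-op parallelism: number of threads used inside a single op
# (matrix multiplies, convolutions, etc.).
torch.set_num_threads(4)

print(torch.get_num_threads())         # 4
print(torch.get_num_interop_threads()) # 4
```

Alternatively, the same intra-op limit can be set from the shell before launching the script, e.g. `OMP_NUM_THREADS=4 python train.py`.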