I need to parallelize the training of an ANN using n cores of a CPU (not GPU). Is it possible to achieve this in PyTorch? All the parallelization examples that I have seen here use GPUs…
You could use
torch.set_num_threads(int) to define the number of threads used for intraop parallelism and
torch.set_num_interop_threads(int) for interop parallelism (e.g. in the JIT interpreter) on the CPU. Also, the env var
MKL_NUM_THREADS might be useful.
More information is given in the CPU threading docs.
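A minimal sketch of configuring the thread counts at the start of a script (the count of 4 is an arbitrary example; pick it based on your core count):

```python
import torch

# Inter-op parallelism (e.g. in the JIT interpreter) must be set
# before any inter-op parallel work has started, so do it first.
# torch.set_num_interop_threads(4)

# Intra-op parallelism: threads used inside ops such as matmul.
torch.set_num_threads(4)

print(torch.get_num_threads())  # → 4
```

After this, a regular CPU training loop (model and tensors left on the default CPU device) will automatically use the configured number of threads for the heavy ops.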