Using multiple CPU cores for training

Is torch.nn.parallel.DistributedDataParallel only applicable to GPUs, or can it also be used on a CPU with multiple cores?
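
For context, here is a minimal sketch of what I am hoping would work on a CPU-only machine. The toy model, world size, address, and port are just placeholders, and I am assuming the "gloo" backend is the right one to pick for CPU (as opposed to "nccl", which I understand to be GPU-only):

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def run(rank, world_size):
    # Rendezvous settings for the process group (placeholder values)
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    # "gloo" is the CPU-capable backend
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = torch.nn.Linear(10, 1)        # toy model, stays on CPU
    ddp_model = DDP(model)                # no device_ids -> CPU tensors

    loss_fn = torch.nn.MSELoss()
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for _ in range(5):
        optimizer.zero_grad()
        outputs = ddp_model(torch.randn(20, 10))
        loss = loss_fn(outputs, torch.randn(20, 1))
        loss.backward()                   # gradients all-reduced across processes
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2                        # one process per CPU worker
    mp.spawn(run, args=(world_size,), nprocs=world_size, join=True)
```

Is this the intended way to spread training across CPU cores, or is DDP not meant for this use case at all?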