PyTorch Model Inference with CPU Parallelization

Hi,

I am trying to run inference on a PyTorch model saved with torch.jit.script(). Is there a C++ example where model inference is parallelized across multiple CPUs? I am using mpirun to parallelize the C++ code, but when I run it with mpirun -np ${n}, every Torch library call is executed n times, once per process, instead of the work being split across processes. Is there a page like this one: Optional: Data Parallelism — PyTorch Tutorials 2.5.0+cu124 documentation, but for CPU parallelization instead of GPUs?
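In case it helps, here is a minimal sketch of the pattern I am after, assuming each MPI rank loads its own copy of the scripted model and processes a disjoint slice of the inputs (the model path, input shape, and sample count below are placeholders, and I assume the sample count divides evenly across ranks):

```cpp
#include <mpi.h>
#include <torch/script.h>
#include <ATen/Parallel.h>
#include <iostream>
#include <vector>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int rank = 0, world_size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Each rank loads its own copy of the TorchScript model.
    torch::jit::script::Module module;
    try {
        module = torch::jit::load("model.pt");  // placeholder path
    } catch (const c10::Error& e) {
        std::cerr << "rank " << rank << ": failed to load model\n";
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    module.eval();

    // Pin intra-op threading so n ranks on one node do not
    // oversubscribe the cores.
    at::set_num_threads(1);

    // Placeholder workload: 64 samples, split evenly across ranks.
    const int64_t total = 64;
    const int64_t per_rank = total / world_size;

    torch::NoGradGuard no_grad;
    for (int64_t i = rank * per_rank; i < (rank + 1) * per_rank; ++i) {
        // Placeholder input shape; each rank builds only its own inputs.
        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(torch::randn({1, 3, 224, 224}));
        at::Tensor out = module.forward(inputs).toTensor();
        std::cout << "rank " << rank << " sample " << i
                  << " output sum " << out.sum().item<float>() << "\n";
    }

    MPI_Finalize();
    return 0;
}
```

Here I call at::set_num_threads(1) so libtorch's intra-op thread pool does not fight with the MPI ranks for cores; is that the recommended way to combine mpirun with libtorch, or is there a better-supported approach?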

Sincerely,
Max