How to limit the number of threads in libtorch

With PyTorch, we can use torch.set_num_threads() to control CPU parallelism, but I don't know what torch.set_num_threads() corresponds to in libtorch, or how to control CPU parallelism there. Does anyone know?

I have tried omp_set_num_threads(), but it has no effect. (In an inference test with ResNet on CPU, calling set_num_threads in Python with different thread counts produces different CPU usage.)