Detectron2 inference on CPU in C++

Hi all,
I would like to run inference with a detectron2 mask_rcnn_X_101_32x8d_FPN_3x.yaml model on the CPU, in C++ with Visual Studio 2019. Is this possible?
Thank you
Best regards

I have seen solutions for detectron2 inference in C++ with TorchScript, but they all use the GPU. Is it possible to run on the CPU?

I assume that in your code you are explicitly moving the data and model to the GPU via .to(torch::kCUDA). If you remove that op, inference should run on the CPU by default.

And how can I load the model I cited, please?