Hello, I have trained a model with PyTorch, but how do I run inference using C++?
It's a detection model similar to RFBNet and SSD, so has anyone tried loading such a model in C++?
To use a PyTorch model for inference in C++, you first need to convert the model to TorchScript. This tutorial is a good starting point: https://pytorch.org/tutorials/advanced/cpp_export.html.
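For reference, the conversion step from that tutorial can be sketched like this. The tiny stand-in module, the `model.pt` filename, and the 300x300 input size (typical for SSD/RFBNet-style detectors) are assumptions for illustration, not your actual model:

```python
# Sketch: trace a model to TorchScript so it can later be loaded in C++.
# TinyDetectorHead is a placeholder for the real detection network.
import torch
import torch.nn as nn

class TinyDetectorHead(nn.Module):
    """Stand-in for a real detector; tracing works the same way."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyDetectorHead().eval()
example = torch.rand(1, 3, 300, 300)  # assumed SSD-style input size

# Record the operations executed on the example input.
traced = torch.jit.trace(model, example)
traced.save("model.pt")  # in C++: torch::jit::load("model.pt")

# Sanity check: the traced module reproduces the eager output.
print(torch.allclose(traced(example), model(example)))  # True
```

One caveat for detection models: tracing only records the path taken for the example input, so data-dependent control flow (e.g. post-processing/NMS branches) may need `torch.jit.script` instead, or can be kept outside the exported graph and done in C++.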
I know; I have successfully traced a classification model and run it from C++, but I'm not sure about detection models.
BTW, is there any comparison of C++ inference speed between libtorch and Caffe or TensorFlow? Which is faster for the same model architecture?
Yes, but it doesn't give much of a speedup (just a little).
I am going to use onnxruntime as the inference engine instead. It can use Intel MKL for CPU acceleration, or CUDA/TensorRT for GPU acceleration.
Did you face any difficulties exporting the detection model to C++?
Did everything run smoothly for you?