onnx2trt generates a .trt model — how do I load it in TensorRT?

I know this is not strictly a PyTorch issue, but since an ONNX model gains a huge performance boost when using TensorRT for inference, many people must have tried this.

I have generated a mobilenetv2.trt model with the onnx2trt tool; how do I load it in TensorRT?

Could anyone provide a basic inference example for this?

Most examples I have found load the model directly from ONNX and parse it with NvOnnxParser; since we already have a serialized .trt engine, I think that step is unnecessary…
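For reference, a prebuilt engine can be deserialized directly, skipping NvOnnxParser entirely. Below is a minimal sketch using the TensorRT Python API with pycuda for device buffers. The file name mobilenetv2.trt comes from the question; treating binding 0 as the input, feeding random data, and reading the last binding as the output are assumptions for illustration:

```python
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates and activates a CUDA context

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine built by onnx2trt -- no NvOnnxParser needed.
with open("mobilenetv2.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate host/device buffers for every binding (inputs and outputs).
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.empty(trt.volume(shape), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Fill binding 0 (assumed to be the image input) with dummy data and run.
host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)

# Copy the (assumed) output binding back to the host.
cuda.memcpy_dtoh(host_bufs[-1], dev_bufs[-1])
print(host_bufs[-1][:5])
```

Note this requires a CUDA-capable GPU with the `tensorrt` and `pycuda` packages installed, and the engine must have been built on the same GPU architecture and TensorRT version.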

I will give a bitcoin to anyone who solves my question

Hi, I have run object detectors using TensorRT. You may contact me at namanbhayani.bitspilani@gmail.com if you need help.

@Naman_Bhayani Which model are you using?

I tried to send you an email; hopefully you can guide me.

Hey @jinfagang

I have created a repo for the same. Can you check if this helps?