I know this is not a PyTorch issue, but since an ONNX model can gain a huge performance boost when using TensorRT for inference, many people must have tried this.
I have generated a mobilenetv2.trt engine with the onnx2trt tool — how do I load it and run inference in TensorRT?
Could anyone provide a basic inference example for this?
Most of the usage I have found loads the model directly from ONNX and parses it with NvOnnxParser, but since we already have a serialized TensorRT engine, I think that step is unnecessary…
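From what I can tell, deserializing the engine should look roughly like the sketch below, using TensorRT's Python API together with pycuda. This is untested on my side and makes a few assumptions: that the engine has a single input binding of shape (1, 3, 224, 224) and a single output of 1000 logits (standard MobileNetV2), and that you are on TensorRT 7+ where `execute_async_v2` is available. Please correct me if the buffer handling is wrong:

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine produced by onnx2trt -- no NvOnnxParser needed
with open("mobilenetv2.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Host buffers (assumed shapes for MobileNetV2: NCHW input, 1000-class output)
h_input = np.random.random((1, 3, 224, 224)).astype(np.float32)
h_output = np.empty((1, 1000), dtype=np.float32)

# Device buffers
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)
stream = cuda.Stream()

# Copy input to device, run inference, copy output back
cuda.memcpy_htod_async(d_input, h_input, stream)
context.execute_async_v2(bindings=[int(d_input), int(d_output)],
                         stream_handle=stream.handle)
cuda.memcpy_dtoh_async(h_output, d_output, stream)
stream.synchronize()

print("predicted class:", h_output.argmax())
```

Does the binding order here (input first, output second) always match what onnx2trt produces, or should it be queried from the engine?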
I will give a bitcoin to anyone who solves my question