Deploy a model with a PyTorch custom operator via ONNX to TensorRT?

Is it possible to deploy a model on an NVIDIA embedded platform (for example: NVIDIA AGX Xavier, NVIDIA Drive PX2, NVIDIA Drive Xavier)? The model contains a sparse convolution PyTorch custom operator implemented in C++.
The pipeline would be: PyTorch .pth model → ONNX model → TensorRT engine.
So the problem reduces to: sparse-conv PyTorch custom op → ONNX custom op → TensorRT plugin.
As far as I know, PyTorch custom op → ONNX custom op is officially supported, but I am not sure about the last step: ONNX custom op → TensorRT plugin.
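For the first step, the mechanism I am aware of is registering a symbolic function so the exporter emits a node in a custom ONNX domain. A minimal sketch, assuming the C++ extension registers the op as `mynamespace::sparse_conv` (all names here are placeholders, not the real extension):

```python
import torch
from torch.onnx import register_custom_op_symbolic

# Hypothetical: the C++ extension is assumed to register "mynamespace::sparse_conv"
# and to be loadable as a shared library.
torch.ops.load_library("libsparse_conv.so")

class Net(torch.nn.Module):
    def forward(self, x, w):
        return torch.ops.mynamespace.sparse_conv(x, w)

# Map the custom op to a node in a custom ONNX domain; a TensorRT plugin
# would later have to match this op type ("SparseConv").
def sparse_conv_symbolic(g, x, w):
    return g.op("trt_plugins::SparseConv", x, w)

register_custom_op_symbolic("mynamespace::sparse_conv", sparse_conv_symbolic, opset_version=11)

torch.onnx.export(
    Net(),
    (torch.randn(1, 16, 32, 32), torch.randn(16, 16, 3, 3)),
    "model.onnx",
    opset_version=11,
    custom_opsets={"trt_plugins": 1},  # opset version for the custom domain
)
```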
Has anybody deployed a model this way successfully? Not limited to sparse convolution.
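For the last step, my understanding is that the TensorRT ONNX parser falls back to the plugin registry for op types it does not recognize, so the plugin shared library has to be loaded before parsing. A sketch of the build side in Python, assuming a C++ `IPluginV2` implementation compiled into a hypothetical `libsparse_conv_plugin.so` (builder API shown is TensorRT 8+; older versions differ):

```python
import ctypes
import tensorrt as trt

# Hypothetical plugin library: a plugin named "SparseConv" registered via
# REGISTER_TENSORRT_PLUGIN inside this shared object.
ctypes.CDLL("libsparse_conv_plugin.so")

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")  # make registered plugins visible to the parser

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
serialized = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized)
```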

Hi @li_bi, since this is an ONNX-specific question, your best bet is to open a GitHub issue on an ONNX repo like onnx/onnx (https://github.com/onnx/onnx).