Custom C++/CUDA from Torch to ONNX to TRT

I was wondering what the best way is to go from a custom C++/CUDA PyTorch operation to ONNX and then to TensorRT (I want to end up running in real time on an AGX Xavier).
The custom module is written following the “Custom C++ and CUDA Extensions” tutorial; however, the torch.onnx custom operators section mentions using TorchScript. Is that the proper way of doing it, or can I just export/register the forward static method from the custom autograd.Function somehow?
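For concreteness, what I had in mind is roughly the pattern below (just a sketch: `my_correlation_ext` stands in for my compiled extension, and the `mydomain::Correlation` op name is a placeholder):

```python
import torch
from torch.autograd import Function

import my_correlation_ext  # placeholder name for the compiled C++/CUDA extension


class CorrelationFunction(Function):
    @staticmethod
    def forward(ctx, input1, input2):
        # In the real module this calls into the compiled extension.
        return my_correlation_ext.forward(input1, input2)

    @staticmethod
    def symbolic(g, input1, input2):
        # Called instead of forward() during torch.onnx.export:
        # emits a node in a custom domain into the ONNX graph.
        # The runtime (e.g. a TensorRT plugin) must then provide
        # an implementation with a matching name.
        return g.op("mydomain::Correlation", input1, input2)
```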

The actual error I get is below:
ONNX export failed: Couldn’t export Python operator CorrelationFunction

Aside: the ONNX exporter also has issues with grid_sample, but I just enabled the ATen fallback. Is this okay, or is it better to also make this a custom plugin for performance?
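The fallback I mean is just the export flag (sketch; `model` and `dummy_input` stand in for my network and a sample input):

```python
import torch

# Export with the ATen fallback so ops without ONNX symbolics
# (e.g. grid_sample) are emitted as ATen nodes rather than
# aborting the export. The consumer must then handle ATen ops.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    opset_version=11,
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
```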

Source code for reference

Cheers

Uhh bump…?

Have you managed to export a model with grid_sampler to ONNX?

Yeah, I had another go at it yesterday and figured out how to correctly define a placeholder. You will still have to add your own runtime implementation for ONNX; since I’m using TensorRT, I made an implementation for that (although it hasn’t been validated yet, I only just started working on it again).
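Roughly, the placeholder is a symbolic registered for the aten op, so the exporter emits a custom-domain node instead of failing (a sketch of the idea: the `mydomain::GridSampler` name and attribute names are whatever your runtime plugin matches on, and the exact registration spelling can vary between PyTorch versions):

```python
import torch.onnx.symbolic_helper as sym_help
from torch.onnx import register_custom_op_symbolic


def grid_sampler_symbolic(g, input, grid, mode, padding_mode, align_corners):
    # Emit a placeholder node in a custom domain; a TensorRT
    # plugin with the matching name supplies the actual kernel.
    return g.op(
        "mydomain::GridSampler",
        input,
        grid,
        interpolation_mode_i=sym_help._maybe_get_const(mode, "i"),
        padding_mode_i=sym_help._maybe_get_const(padding_mode, "i"),
        align_corners_i=sym_help._maybe_get_const(align_corners, "i"),
    )


# Register against the aten op so export no longer fails on it.
register_custom_op_symbolic("aten::grid_sampler", grid_sampler_symbolic, 11)
```

On the TensorRT side, when the ONNX parser hits the unknown node it looks for a registered plugin creator with a matching op name, which is where the runtime implementation comes in.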