I am working on a Python toolchain that exports PyTorch models to ONNX and reads them back on the client side, where a custom inference engine processes and executes them. I'd like to understand whether the following use cases can be covered. I want to avoid solutions that require modifying the torch or onnx source code, since the toolchain will be used by other people who will only have the pip-installed torch/onnx packages, not a modified source tree. My problematic use cases are:
Exporting unsupported PyTorch ops to ONNX such that they can be read back on the client side: so far I have been able to export unsupported ops using torch.onnx.export(operator_export_type=ONNX_FALLTHROUGH), which does export them as-is, but the client side then cannot process them because ONNX shape inference (and the checker) cannot be run on the resulting model. Can this be fixed somehow, either by injecting a shape inference function (I could only find such a hook on the C++ side, not in Python, which is not what I need) or by exporting the tensor shapes into the model itself, so that shape inference is not required on the client side at all?
Exporting a Python function as a single op/function in ONNX: I saw in the ONNX IR some sort of (experimental?) support for functions, but could not find a way to export such functions from PyTorch. An example would be exporting the channel-shuffle operation in ShuffleNet as a single op instead of a reshape-transpose-reshape sequence. Is that possible?