I am using a PyTorch model (GitHub - MhLiao/MaskTextSpotterV3: The code of "Mask TextSpotter v3: Segmentation Proposal Network for Robust Scene Text Spotting") and I want to export it to ONNX. I already asked on the ONNX GitHub issue tracker, and they suggested asking the question here.
The issue is that the model performs post-processing between some layers. For example, we post-process the outputs of the RPN boxes before feeding the proposals to the heads. Tracing therefore produces a lot of warnings. For instance, we convert tensors to NumPy arrays to do image computation with OpenCV, which triggers:

TracerWarning: Converting a tensor to a NumPy array might cause the trace to be incorrect.

That case can be found in the class SEGPostProcessor(torch.nn.Module) in the repository cited above.
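To illustrate the problem (a minimal hypothetical sketch, not the actual SEGPostProcessor code): the round-trip through NumPy below exits the traced graph, so the tracer records its result as a constant for the example input, whereas the tensor-only rewrite stays recordable.

```python
import torch

# Hypothetical post-processing step, for illustration only.
# The NumPy round-trip leaves tensor land: during tracing, the values
# computed here are baked into the graph as constants, so the exported
# model ignores new inputs at this step.
def postprocess_numpy(t: torch.Tensor) -> torch.Tensor:
    arr = t.detach().cpu().numpy()          # exits the traced graph
    return torch.from_numpy(arr.clip(0.0, 1.0))

# Trace-friendly rewrite: the same logic expressed with tensor ops,
# which the tracer (and the ONNX exporter) can record.
def postprocess_tensor(t: torch.Tensor) -> torch.Tensor:
    return t.clamp(0.0, 1.0)

x = torch.tensor([-0.5, 0.5, 1.5])
assert torch.equal(postprocess_numpy(x), postprocess_tensor(x))
```

When the post-processing really needs OpenCV, there may be no direct tensor equivalent, but each op that can be rewritten this way removes one source of baked-in constants from the trace.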
Another example: sometimes we have to iterate over a tensor, which triggers:

RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect.
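For data-dependent loops like that, one common workaround is torch.jit.script, which compiles the Python control flow instead of unrolling it for the example input. A minimal sketch (the function name and logic are made up for illustration):

```python
import torch

# Tracing would unroll this loop to the example input's length and bake
# the branch decisions in; scripting keeps loop and branch data-dependent.
@torch.jit.script
def count_positive(t: torch.Tensor) -> int:
    n = 0
    for i in range(t.size(0)):
        if bool(t[i] > 0):
            n += 1
    return n
```

Note that ONNX export of scripted code has its own operator-coverage limits, so it is worth checking the exporter documentation for your torch version before committing to this route.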
The conversion to ONNX thus emits many warnings, and, unsurprisingly, ONNX Runtime eventually fails.
Here is the runtime failure log:
2021-01-19 13:41:54.381600130 [E:onnxruntime:, sequential_executor.cc:333 Execute] Non-zero status code returned while running Gather node. Name:'' Status Message: indices element out of data bounds, idx=94375546868464 must be within the inclusive range [-1,0]
Traceback (most recent call last):
  File "tools_onnx/test_net_onnx.py", line 263, in <module>
    main()
  File "tools_onnx/test_net_onnx.py", line 233, in main
    ort_outs = ort_session.run(None, ort_inputs)
  File "/home/ubuntu/anaconda3/envs/masktextspotter37/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 124, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Gather node. Name:'' Status Message: indices element out of data bounds, idx=94375546868464 must be within the inclusive range [-1,0]
One idea was to split the model into several sub-models and run the post-processing between them, but that is very tedious, and I suspect there is a better solution.
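An alternative to splitting, as I understand the docs, would be mixing tracing and scripting: script only the data-dependent post-processing module and let the tracer inline it. A hypothetical sketch (Backbone/PostProcess/TinyModel are illustrative names, not from the repository):

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a data-dependent post-processing step
# (e.g. keeping proposals above a score threshold).
class PostProcess(nn.Module):
    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        keep = scores > 0.0              # data-dependent selection
        return scores[keep]

class TinyModel(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.backbone = nn.Linear(4, 4)
        # Script only the data-dependent part; the scripted graph is
        # embedded as-is instead of being (incorrectly) traced.
        self.post = torch.jit.script(PostProcess())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.post(self.backbone(x))

model = TinyModel().eval()
traced = torch.jit.trace(model, torch.randn(1, 4))
```

One could then try `torch.onnx.export(traced, ...)` on the combined module; whether the scripted ops survive export still depends on the exporter's operator coverage in the torch version used, so this is a direction to test, not a guaranteed fix.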
If you have any information about the right way to handle these kinds of steps, it would be really helpful. Thanks in advance.