I am using PyTorch 1.3.1 and trying to export a model to ONNX.
In an FPN-like module, I use an Upsample operation whose target size is determined by the size of the previous feature map.
The ONNX exporter traces the read of the previous tensor's size, so instead of treating the upsample target size as a constant, it translates the size reading into a messy ONNX subgraph that recomputes the size at runtime. This is unnecessary: since the input size is fixed, the upsample size is fixed too.
Even when I try to stop the tracing by detaching the size tensor and converting it to a tuple of Python ints, it is still traced, and I get this warning:
TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
How can I make the ONNX exporter stop tracing that operation and treat the upsample size as a constant?