ONNX export - do not trace tensor size reading

I am using PyTorch 1.3.1 and trying to export a model to ONNX.

Implementing an FPN-like module, I use an Upsample operation whose target size is determined by the size of the previous feature map.

The ONNX exporter traces the operation of reading the previous tensor's size, so instead of treating the upsample target size as a constant, it translates the size read into a messy subgraph of ONNX shape operations. This is unnecessary: since the input size is fixed, the upsampling target size is fixed too.
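A minimal sketch of the pattern (the module and names are illustrative, not my actual model): the interpolation target is read from the lateral feature map, and it is exactly this size read that the tracer records as graph operations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNBlock(nn.Module):
    """Illustrative FPN-style merge: upsample `top` to `lateral`'s size."""

    def forward(self, top, lateral):
        # Reading lateral's spatial size here is what the ONNX exporter
        # traces into shape-reading nodes instead of a constant.
        h, w = lateral.shape[2], lateral.shape[3]
        return F.interpolate(top, size=(h, w), mode="nearest") + lateral
```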

I even get this warning when I try to get rid of the tracing by detaching and converting the size to a tuple; it still gets traced:

TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
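For reference, the attempt looked roughly like this (a sketch, not the exact code): detaching the tensor and building a plain tuple of ints is precisely the tensor-to-Python-integer conversion the TracerWarning refers to.

```python
import torch
import torch.nn.functional as F

def upsample_like(top, lateral):
    # Attempted fix: detach, then convert the sizes to a plain tuple of
    # ints. Under tracing, int(...) on a size value is the tensor-to-
    # Python-integer conversion that triggers the TracerWarning above.
    size = tuple(int(s) for s in lateral.detach().shape[2:])
    return F.interpolate(top, size=size, mode="nearest")
```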

How can I stop the ONNX exporter from tracing that operation, so that it uses the upsample size as a constant?

Just found an ugly workaround: after converting the tensor's H and W dimensions into a tuple (which still doesn't get rid of the tracing), I create another tuple consisting of the same values plus 0.2, then floor those values. The detour through Python float arithmetic breaks the traced data flow, so the upsampling resize dimensions are treated as constants rather than ONNX operations, while their values are preserved, since floor(h + 0.2) == h for integer h. :stuck_out_tongue:
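In code, the workaround looks roughly like this (a sketch; the function name is illustrative):

```python
import math

import torch
import torch.nn.functional as F

def upsample_like_constant(top, lateral):
    # Add 0.2 and floor: pushing the sizes through Python float
    # arithmetic breaks the traced data flow, so they end up baked into
    # the exported graph as constants. floor(s + 0.2) == s for integer
    # s, so the actual target size is unchanged.
    size = tuple(int(math.floor(s + 0.2)) for s in lateral.shape[2:])
    return F.interpolate(top, size=size, mode="nearest")
```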