Forward pass working in PyTorch but not ONNX

I am looking for generic troubleshooting advice others have found helpful when debugging PyTorch to ONNX exports.

Following the official PyTorch tutorial, I converted a third-party model to ONNX (see this GitHub discussion for details and code to reproduce). The strange thing is that my dummy image batch works fine for forward prop in the PyTorch version of the model, but when I try to initialize an ONNXRuntime session, it raises the following error:

ort_session = onnxruntime.InferenceSession(path)  <-- raised by this Python line
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : 
FAIL : Load model from tf_efficientdet_lite0.onnx failed:Node (Concat_42) Op (Concat) 
[ShapeInferenceError] All inputs to Concat must have same rank

So far, I have used Netron's visualization of the ONNX model to identify the Concat_42 node in question. However, I am struggling to determine which line of code in the PyTorch implementation this node corresponds to. Has anyone solved a problem like this before? I imagine there are clues in the verbose output of torch.onnx.export that would help identify the culprit in the PyTorch model.

In principle, it seems odd that forward prop can work in PyTorch yet hit shape compatibility issues in ONNX. Given this, I suspect there is a portion of the model definition (if-statements, loops, etc.) that does not trace well and therefore breaks when given an input tensor other than the one that produced the trace. However, I use the same dummy tensor for the trace and the ONNXRuntime session, so this explanation does not seem to hold.
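For anyone unfamiliar with the failure mode I have in mind, here is a minimal illustration (a toy function, not the EfficientDet code) of how tracing specializes Python-level control flow to the dummy input:

```python
import torch

def forward(x):
    # Python-level branch on the input's shape: evaluated once at trace
    # time with the dummy input, then baked into the recorded graph
    if x.shape[1] > 2:
        return x.reshape(-1)
    return x

# Trace with a (1, 4) dummy, so the reshape branch is recorded
traced = torch.jit.trace(forward, torch.zeros(1, 4))

# The traced graph always flattens, even for an input that would have
# taken the other branch in eager mode:
print(traced(torch.zeros(1, 1)).shape)  # torch.Size([1])
print(forward(torch.zeros(1, 1)).shape)  # torch.Size([1, 1]) in eager mode
```

Since torch.onnx.export traces the model the same way for non-scripted modules, any branch like this ends up frozen in the exported graph.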