Torch-TensorRT + FX Frontend with torch.fx.wrap

Hi,

I have a workflow that uses torch.fx, and it works great. It did, however, require wrapping quite a few functions with torch.fx.wrap to declare them as leaf functions that are not traced through, because they contain dynamic control flow (specifically, this is Swin-Transformer with varying input sizes, which pads various tensors so the model works with any input size). What are the implications of this when I try to create a TensorRT version of the model via the Torch-TensorRT FX frontend?
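
For context, here is a minimal sketch of the kind of wrapping I mean (the `pad_to_multiple` helper and the window size of 7 are just illustrative placeholders, not my actual code):

```python
import torch
import torch.fx
import torch.nn.functional as F

def pad_to_multiple(x, multiple):
    # Dynamic control flow: the amount of padding depends on the runtime
    # input size, so symbolic tracing would either fail or bake in one shape.
    h, w = x.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    if pad_h or pad_w:
        x = F.pad(x, (0, pad_w, 0, pad_h))
    return x

# Declare the helper as a leaf so symbolic_trace records a call to it
# instead of tracing through the data-dependent branches.
torch.fx.wrap("pad_to_multiple")

class Block(torch.nn.Module):
    def forward(self, x):
        return pad_to_multiple(x, 7)  # e.g. window size 7, as in Swin

gm = torch.fx.symbolic_trace(Block())
print(gm.graph)  # pad_to_multiple shows up as an opaque call_function node
```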

Specifically,

  1. Will Torch-TensorRT still be able to optimize code wrapped with torch.fx.wrap, or will it split the graph and fall back to PyTorch execution for those portions?

  2. Will the result be any different from using the JIT scripting frontend instead of FX, i.e., passing a torch.jit.script'ed model to Torch-TensorRT so I don't have to worry about "wrapped"/skipped functions at all? I don't want to maintain the FX workflow if Torch-TensorRT is just going to do this for me internally by scripting the FX version (assuming that would even expose the internals of the wrapped functions). See the sketch after this list for the two paths I'm comparing.
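
To make the comparison concrete, this is roughly what I have in mind. MyModel, the input shape, and fp16 precision are placeholders, and I'm going from my reading of the torch_tensorrt.compile docs with the ir="fx" / ir="ts" selectors, so please correct me if this isn't the intended usage:

```python
import torch
import torch_tensorrt

model = MyModel().eval().cuda()            # placeholder for the Swin model
example = torch.randn(1, 3, 224, 224).cuda()

# Path A: FX frontend -- symbolic trace with the torch.fx.wrap'd helpers.
trt_fx = torch_tensorrt.compile(
    model,
    ir="fx",
    inputs=[example],
    enabled_precisions={torch.half},
)

# Path B: TorchScript frontend -- script first, no torch.fx.wrap involved.
scripted = torch.jit.script(model)
trt_ts = torch_tensorrt.compile(
    scripted,
    ir="ts",
    inputs=[torch_tensorrt.Input(shape=(1, 3, 224, 224))],
    enabled_precisions={torch.half},
)
```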

Thank you!
-Collin