Hi, how do we avoid recompiles? I am compiling a model with dynamic input shapes, but it triggers a recompile on every iteration and eventually times out.
model = torch.compile(model, fullgraph=True, mode="reduce-overhead", dynamic=True)
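A minimal sketch of the call pattern that reproduces this (toy module and shapes, not my real model; I use `backend="eager"` so it runs without a GPU and drop `mode="reduce-overhead"`, which needs CUDA graphs — the dynamo guard behaviour is the same either way):

```python
import torch

# Toy stand-in for the real model (all names are placeholders).
class Toy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(100, 8)
        self.lin = torch.nn.Linear(8, 2)

    def forward(self, input_ids):
        return self.lin(self.emb(input_ids)).mean(dim=1)

# backend="eager" avoids needing CUDA graphs or a C++ toolchain;
# guards and recompiles still come from dynamo.
model = torch.compile(Toy(), fullgraph=True, dynamic=True, backend="eager")

# The sequence length (dim 1) changes every step, like the
# "expected 67, actual 50" size mismatch in the log below.
for seq_len in (67, 50, 33):
    input_ids = torch.randint(0, 100, (4, seq_len))
    out = model(input_ids)
```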
Recompilation log:
[rank1]:V0501 19:01:33.888000 146171 site-packages/torch/_dynamo/guards.py:2791] [0/7] [__recompiles] Recompiling function forward in /home/jovyan/model.py:129
[rank1]:V0501 19:01:33.888000 146171 site-packages/torch/_dynamo/guards.py:2791] [0/7] [__recompiles] triggered by the following guard failure(s):
[rank1]:V0501 19:01:33.888000 146171 site-packages/torch/_dynamo/guards.py:2791] [0/7] [__recompiles] - 0/6: tensor 'L['batch']['model_inputs'].data['input_ids']' size mismatch at index 1. expected 67, actual 50