When I compile my custom nn.Module with torch.compile, I do not get an error. However, when I actually run the model, I get:
W0121 11:29:04.514000 140682235495680 torch/_dynamo/convert_frame.py:357] torch._dynamo hit config.cache_size_limit (8)
W0121 11:29:04.514000 140682235495680 torch/_dynamo/convert_frame.py:357] function: 'torch_dynamo_resume_in_forward_at_1213' (/path_to_my_module/module.py:1213)
W0121 11:29:04.514000 140682235495680 torch/_dynamo/convert_frame.py:357] last reason: L['___stack0'] == 1737455341.032532
W0121 11:29:04.514000 140682235495680 torch/_dynamo/convert_frame.py:357] To log all recompilation reasons, use TORCH_LOGS="recompiles".
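For context, this is roughly how I compile and call the model (a minimal sketch, not my actual code; MyModule and the input shapes here are placeholders):

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):  # placeholder standing in for my custom module
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = torch.compile(MyModule())   # compiling itself raises no error
out = model(torch.randn(4, 16))     # warnings like the above appear on calls like this
```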
How do I tackle this problem? For example, what are the principles for setting cache_size_limit? Is it a problem if I set it to, say, 64 via torch._dynamo.config.cache_size_limit = 64?
Or is the problem actually somewhere else?
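Concretely, raising the limit would look something like this (a sketch; I am assuming the value only needs to be set once per process, before the first compiled call):

```python
import torch
import torch._dynamo

# Raise Dynamo's recompile cache limit from its default of 8.
# (Assumption: setting this once per process, before the first
# compiled call, is all that is needed.)
torch._dynamo.config.cache_size_limit = 64

compiled = torch.compile(torch.nn.Linear(16, 16))  # stand-in for my module
```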
Where do I set TORCH_LOGS="recompiles"? And does this warning mean that the model recompiles on every call?
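If it is the environment variable the warning refers to, I assume it would be set like this (a sketch; train.py stands in for my actual entry point, and I am taking torch._logging.set_logs to be the programmatic equivalent):

```python
# Option 1: an environment variable set when launching the script
# (train.py is a placeholder for my actual entry point):
#
#   TORCH_LOGS="recompiles" python train.py
#
# Option 2: programmatically, before the first compiled call
# (assuming torch._logging.set_logs is the in-process equivalent):
import torch._logging

torch._logging.set_logs(recompiles=True)
```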