Hey, I am getting the error below in my testing, where I run inputs of various shapes through my model.
msg = 'cache_size_limit reached'

    def unimplemented(msg: str) -> NoReturn:
        assert msg != os.environ.get("BREAK", False)
>       raise Unsupported(msg)
E       torch._dynamo.exc.Unsupported: cache_size_limit reached

../../anaconda3/lib/python3.9/site-packages/torch/_dynamo/exc.py:193: Unsupported
I understand this happens when too many recompilations occur. I can get around it by raising the limit (sketch below).
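Concretely, this is the workaround I mean, as a minimal sketch; 64 is just an arbitrary number I picked, not a recommendation:

import torch._dynamo

# Raise the recompile cache limit (it defaults to a small value, 8 in the
# builds I've used) so varying input shapes don't hit it as quickly.
torch._dynamo.config.cache_size_limit = 64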
But my questions are:
- What’s the expected best practice when hosting in a production environment? Should I just set this limit to a very high value?
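For context, here is a minimal sketch of the pattern that triggers this for me; the model and shapes are toy stand-ins, in my real code the shapes come from the data:

import torch

# Toy stand-in for my model; the real one takes inputs whose
# shapes vary from batch to batch.
model = torch.compile(torch.nn.Linear(16, 4))

# Depending on the torch version and dynamic-shape settings, each new
# input shape can trigger a recompilation; after enough distinct shapes,
# dynamo raises Unsupported: cache_size_limit reached.
for batch_size in (1, 2, 3, 5, 8, 13, 21, 34, 55):
    x = torch.randn(batch_size, 16)
    model(x)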