mark_dynamic does not work when the dimension is 1

Dear experts, I am trying to use torch.compile and avoid recompiles due to dynamic shapes, but it always seems to cause a recompile when the dynamic dimension happens to be 1. Below is my minimal repro code:

import torch
@torch.compile
def f(x):
  return x.sum()

x = torch.rand([4, 2, 2, 2])
#torch._dynamo.decorators.mark_unbacked(x, 0) # This works
#torch._dynamo.mark_dynamic(x, 0) # This doesn't work
torch._dynamo.mark_dynamic(x, 0, min=1, max=4) # This doesn't work either
f(x)
f(torch.rand([1, 2, 2, 2]))  # This line causes recompile!
#f(torch.rand([2, 2, 2, 2])) # If I do this instead (2 instead of 1 in the 0-dim), it works nicely

Normally, calling torch._dynamo.mark_dynamic(x, 0) works as expected when the 0-th dimension changes, as long as it stays greater than or equal to 2.
However, when the dimension becomes 1, it triggers a recompile. Using mark_dynamic() with an explicit min=1 doesn't help either.
The only way I found to make it work is torch._dynamo.decorators.mark_unbacked(x, 0). From the documentation, I wasn't sure what this actually does, but it seems this is not really recommended (here: https://docs.pytorch.org/docs/stable/user_guide/torch_compiler/torch.compiler_dynamic_shapes.html#mark-unbacked-tensor-dim).

Is this a bug, or a known issue? What is the recommended fix: mark_unbacked, rewriting my code so that the dimension size is never 1, or some other way out?
Thank you for any suggestions!