Nested torch.compile-d function calls with different options / CUDA graph options

How do torch.compile options interact when one torch.compile-decorated function invokes another torch.compile-decorated function that was given different options?

E.g. something like this:

@torch.compile(mode='reduce-overhead', fullgraph=True)
def f(x):
    return x

@torch.compile
def g(x):
    for k in range(10):
        torch._dynamo.graph_break()
        x = f(x)
    return x

Will the nested f() still be executed with reduce-overhead as expected? Or will the outer torch.compile options prevail?

Basically, I'm looking to have some inner computations compiled with fullgraph=True and CUDA graphs, while the outer code is still torch.compile'd with less strict options.

Thanks!