How to disable fake_tensor mode in torch.compile

def _base_backend(gm: torch.fx.GraphModule, example_inputs):
    # Set up the session, context and invocation.
    # Note that we do this on one in-memory module in a few phases:
    #  1. Build it from the FX graph.
    #  2. Run torch MLIR passes to lower it to a suitable form for
    #     input.
    #  3. Run IREE's main compiler.
    #  4. Output to an mmap buffer.
    example_input0 = example_inputs[-1]
    print(type(example_input0))

Above is a code sample from shark-turbine's cpu.py. example_inputs holds the model inputs and parameters, but they are FakeTensors. I want to dump my parameters here, but I know FakeTensors carry no data by design. So my question is: is there some way to dump the parameters, for example by disabling FakeTensor mode, or something else?

After a lot of checking and testing, I found that there is no way to dump data from a FakeTensor; if you really need the values, you have to hook in at the place where the real data is actually used, as in the sketch below.
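For illustration, here is a minimal sketch (not the shark-turbine code) of that idea: the callable a custom backend returns is invoked with the real tensors at runtime, so that is where the values can be saved. The names dumping_backend and wrapper, and the /tmp dump path, are my own assumptions.

import torch

def dumping_backend(gm: torch.fx.GraphModule, example_inputs):
    # example_inputs may be FakeTensors here (metadata only, no storage),
    # so there is nothing to dump at compile time.
    def wrapper(*real_inputs):
        # At call time the compiled artifact receives the real tensors,
        # so this is where the data can actually be saved.
        for i, t in enumerate(real_inputs):
            if isinstance(t, torch.Tensor):
                torch.save(t.detach().cpu(), f"/tmp/arg_{i}.pt")
        return gm(*real_inputs)
    return wrapper

model = torch.nn.Linear(4, 2)
compiled = torch.compile(model, backend=dumping_backend)
compiled(torch.randn(3, 4))  # writes /tmp/arg_*.pt with real values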

The internal context manager maybe_disable_fake_tensor_mode in torch.fx.experimental.proxy_tensor may also be useful.
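As a hedged sketch of how that context manager can be used: it temporarily deactivates the ambient FakeTensorMode, so tensors created inside the block are real, but it does not recover values for FakeTensors that already exist. The helper name and the zero-filled placeholders below are my own illustration.

import torch
from torch.fx.experimental.proxy_tensor import maybe_disable_fake_tensor_mode

def materialize_placeholders(example_inputs):
    # FakeTensors only carry metadata (shape, dtype, device). Inside this
    # context we can at least build real tensors with the same metadata,
    # e.g. to feed a compiler stage that needs concrete storage.
    with maybe_disable_fake_tensor_mode():
        return [
            torch.zeros(t.shape, dtype=t.dtype)
            if isinstance(t, torch.Tensor) else t
            for t in example_inputs
        ]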