Is it possible to define a custom backend wrapped with `aot_autograd` that does not use fake tensors? Considering only forward compilation, a custom backend can be registered as follows:
```python
from torch._dynamo import register_backend
from torch._dynamo.backends.common import aot_autograd

def my_forward_compiler(gm, tensor_inputs):
    ...

register_backend(name="my_backend", compiler_fn=aot_autograd(fw_compiler=my_forward_compiler))
```
In `my_forward_compiler`, the elements of `tensor_inputs` are of type `FakeTensor`. A human-readable table of the `GraphModule` can be printed with `gm.graph.print_tabular()`; however, there appears to be no way to access the underlying data of the argument tensors.
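To make the situation concrete, here is a minimal sketch (the backend name `my_backend_demo` and the `captured` dict are illustrative, not part of any API) showing that the inputs handed to `fw_compiler` are `FakeTensor` objects that expose metadata such as shape and dtype, but no real data:

```python
import torch
from torch._dynamo import register_backend
from torch._dynamo.backends.common import aot_autograd

captured = {}  # records what the compiler saw, for inspection after compilation

def my_forward_compiler(gm, tensor_inputs):
    # FakeTensors carry metadata (shape, dtype, device) but no storage contents.
    captured["types"] = [type(t).__name__ for t in tensor_inputs]
    captured["shapes"] = [tuple(t.shape) for t in tensor_inputs
                          if isinstance(t, torch.Tensor)]
    # Returning the GraphModule unchanged is a valid no-op "compilation".
    return gm

register_backend(name="my_backend_demo",
                 compiler_fn=aot_autograd(fw_compiler=my_forward_compiler))

@torch.compile(backend="my_backend_demo")
def f(x):
    return x * 2

f(torch.randn(3, 4))
print(captured["shapes"])
```

Metadata like `captured["shapes"]` is available at compile time, which is often enough to drive code generation; the tensor contents only exist at run time, inside whatever callable the compiler returns.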