How to handle FunctionalTensor in C++ backend

I am using a custom PyTorch backend written in C++, registered under the ORT dispatch key.

I noticed that during compilation (AOTAutograd's functionalization pass, per the stack below) a FunctionalTensor is created with its device set to ORT, and the `set_.source_Storage_storage_offset` overload of `aten::set_` is called on this tensor. My backend is unable to handle this call. How is a C++ backend supposed to implement ops for FunctionalTensors?

Python call stack (innermost frame first; file paths truncated):

alias_non_inplace_storage (torch\utils\
_correct_storage_aliasing (torch\utils\
return_and_correct_aliasing (torch\utils\
__torch_dispatch__ (torch\_subclasses\
_engine_run_backward (torch\autograd\
grad (torch\autograd\
inner_fn (torch\_functorch\_aot_autograd\
inner_fn_with_anomaly (torch\_functorch\_aot_autograd\
_functionalized_f_helper (torch\_functorch\_aot_autograd\
joint_helper (torch\_functorch\_aot_autograd\
wrapped (torch\fx\experimental\