How to handle FunctionalTensor in C++ backend

I am using a custom PyTorch backend, written in C++, that is registered to the ORT dispatch key.

I noticed that TorchDynamo creates a `FunctionalTensor` with its device set to ORT and calls the `set_.source_Storage_storage_offset` overload of `aten::set_` on this tensor, but my backend is unable to handle it. Do you know how a C++ backend is supposed to implement ops for `FunctionalTensor`s?

Python call stack:

alias_non_inplace_storage (torch\utils\_python_dispatch.py:400)
_correct_storage_aliasing (torch\utils\_python_dispatch.py:413)
return_and_correct_aliasing (torch\utils\_python_dispatch.py:553)
__torch_dispatch__ (torch\_subclasses\functional_tensor.py:460)
_engine_run_backward (torch\autograd\graph.py:744)
grad (torch\autograd\__init__.py:412)
inner_fn (torch\_functorch\_aot_autograd\traced_function_transforms.py:240)
inner_fn_with_anomaly (torch\_functorch\_aot_autograd\traced_function_transforms.py:255)
_functionalized_f_helper (torch\_functorch\_aot_autograd\traced_function_transforms.py:387)
joint_helper (torch\_functorch\_aot_autograd\traced_function_transforms.py:521)
wrapped (torch\fx\experimental\proxy_tensor.py:651)
...
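For reference, here is the kind of kernel registration I imagined would be needed. This is only a sketch of my understanding, not working code: `PrivateUse1` stands in for the ORT dispatch key, `custom_set_storage` is a name I made up, and it assumes the backend's tensors use the stock `TensorImpl` storage/size/stride model, which may not hold for ORT tensors.

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

// Hypothetical sketch (not my real backend code): implement the
// set_.source_Storage_storage_offset overload of aten::set_ for a
// custom dispatch key. Assumes the backend tensor can simply be
// rebound to a new storage via the stock TensorImpl API.
at::Tensor& custom_set_storage(
    at::Tensor& self,
    at::Storage source,
    int64_t storage_offset,
    at::IntArrayRef size,
    at::IntArrayRef stride) {
  auto* impl = self.unsafeGetTensorImpl();
  // Rebind `self` so it views `source` at the given offset/size/stride,
  // which appears to be what alias_non_inplace_storage uses this op for.
  impl->set_storage_keep_dtype(std::move(source));
  impl->set_storage_offset(storage_offset);
  impl->set_sizes_and_strides(size, stride);
  return self;
}

TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  m.impl("set_.source_Storage_storage_offset", TORCH_FN(custom_set_storage));
}
```

Is something along these lines the intended approach, or is there a boxed fallback mechanism a backend is supposed to use instead?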