How do we add a custom backend for the PyTorch profiler?

I see torch.profiler.ProfilerActivity objects for CPU and CUDA; how can I add more for a custom accelerator such as a TPU?
Please advise on any good starting points for adding events from custom hardware into the PyTorch profiler infrastructure.
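
For reference, this is the stock usage I am referring to (a minimal sketch; only CPU and CUDA activities are exposed today, with nothing for other accelerators):

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Only CPU and CUDA activities are available out of the box;
# there is no ProfilerActivity entry for TPUs or other custom hardware.
activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities) as prof:
    x = torch.randn(128, 128)
    y = x @ x

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```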

TIA

I’m not sure this part of PyTorch is very extensible right now. Most likely I would start by looking into kineto to see either how to extend it to support your hardware or what kind of API PyTorch uses, and then look at how PyTorch uses kineto.

Best regards

Thomas

Thanks, Thomas. I appreciate your insight.

Hi guys!
How can we measure the time of an operation on a Google Colab TPU?
PyTorch Profiler — PyTorch Tutorials 1.11.0+cu102 documentation: the PyTorch profiler supports only CPU and CUDA devices, and Libkineto supports NVIDIA GPUs.
Is there a ready-made way to measure how long an operation takes on a Google Colab TPU?

As far as I understand, using time.time_ns() measures only the time spent on the CPU of the VM attached to the TPU.
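
To illustrate, this is roughly what I am trying (just a sketch assuming torch_xla is available on the Colab TPU runtime; the xm.mark_step() / xm.wait_device_ops() calls are my guess at how to force the device work to finish before stopping the timer):

```python
import time
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # TPU device via torch_xla
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)

start = time.time_ns()
c = a @ b
# XLA executes lazily, so without forcing execution the timer would
# mostly capture host-side work on the VM rather than the TPU itself.
xm.mark_step()        # flush the pending graph to the device
xm.wait_device_ops()  # block until the device has finished
elapsed_ms = (time.time_ns() - start) / 1e6
print(f"matmul took {elapsed_ms:.2f} ms")
```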

Thanks!