Are there profiling tools to measure LibTorch (C++) JIT overhead vs Torch library runtime?

We convert PyTorch models to TorchScript IR and run them with the LibTorch runtime (CPU & GPU). We would like to profile this application to find out how much execution time is spent in the JIT (graph executor/interpreter) versus the underlying Torch kernel libraries. Are there any recommended profiling tools/methods?

Cheers,
Yetanadur

I’ve encountered a similar issue. Have you happened to find a good solution?