Tracking progress during inference

Hi, I'm new to PyTorch. I'm using LibTorch to run various pretrained models in C++.
I was wondering: does Torch have any way to track progress during inference? Maybe it's even possible using the profiler? Is there something I can do to a model before tracing it that would let me get intermediate results out of the forward pass? I'm struggling to find any answers to this kind of question.
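To make the last part concrete, this is roughly what I'm imagining (just a rough sketch, using a torchvision resnet18 as a stand-in for my actual models): wrap the model so its forward pass also returns an intermediate activation, then trace that wrapper, so the C++ side would receive a tuple instead of a single tensor.

```python
import torch
import torchvision

# Stand-in model purely for illustration; my real models are different.
backbone = torchvision.models.resnet18(weights=None).eval()

class WithIntermediates(torch.nn.Module):
    """Wrap a model so forward() also returns an intermediate activation."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        # Run the early stages by hand so one intermediate result can be kept.
        x = self.model.conv1(x)
        x = self.model.bn1(x)
        x = self.model.relu(x)
        x = self.model.maxpool(x)
        x = self.model.layer1(x)
        intermediate = x            # e.g. the feature map after layer1
        x = self.model.layer2(x)
        x = self.model.layer3(x)
        x = self.model.layer4(x)
        x = self.model.avgpool(x)
        x = torch.flatten(x, 1)
        out = self.model.fc(x)
        return out, intermediate    # shows up as a tuple output on the C++ side

example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(WithIntermediates(backbone), example)
traced.save("resnet18_with_intermediates.pt")
```

Is something along these lines the intended approach, or is there a better mechanism (hooks, the profiler, etc.) for getting progress/intermediate information during inference?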
Thanks all