How to check the flow of execution (programs, function invocations)

The following command is present in ModelLoader.md

${GLOW_PATH}/build/bin/image-classifier ${GLOW_PATH}/tests/images/imagenet/cat_285.png -image-mode=0to1 -m resnet18.onnx -model-input-name=data -cpu

and it gives the following output when run:

Model: resnet18.onnx
File: ${GLOW_PATH}/tests/images/imagenet/cat_285.png Label-K1: 285 (probability: 16.4305)

Please excuse me if this is a trivial question. Is there any way to see the flow of execution of that command, i.e. which programs/functions are executed and in what order?

Hi @Kittu28, are you asking about understanding all of the different components a model travels through from entering the image-classifier until a result is produced and printed? I’m not sure there’s a super solid way other than adding breakpoints and stepping into the program as it executes. I can give you a high level overview of what’s happening if that is of use.

Yes, please give me a high-level overview of what's happening.
Also, how can I add breakpoints in the code? I don't know which programs are being executed.

RE: the breakpoint, you could just set a breakpoint at main and then follow the flow on down into function calls.
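For example, with gdb (this assumes you built Glow with debug symbols, e.g. CMAKE_BUILD_TYPE=Debug; the paths below are the same ones from your command):

gdb --args ${GLOW_PATH}/build/bin/image-classifier ${GLOW_PATH}/tests/images/imagenet/cat_285.png -image-mode=0to1 -m resnet18.onnx -model-input-name=data -cpu
(gdb) break main
(gdb) run

From there, step steps into the next function call, next steps over it, and backtrace shows how you reached the current function.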

The overall flow looks like:

  • Load the model. This can be e.g. ONNX, Caffe2, etc. This will iterate over a protobuf for the model and create one or more Glow Nodes for every op in the model.
  • Optimize the Node IR (GraphOptimizer.cpp)
  • Quantize/add profiling nodes, and/or convert to fp16, if requested
  • Optimize Node IR some more
  • If requested by the backend, lower to other Nodes, e.g. take an LSTM and lower it to constituent nodes such as Mul, Add, Sigmoid, Tanh, etc. (Lower.cpp)
  • Optimize Node IR some more
  • Either load into the backend’s API here at Node IR level, or lower to Glow Instruction level (IRGen.cpp)
    • If using Instruction IR, perform Instruction-level optimizations (IROptimizer.cpp)
  • Backend produces some binary at this point as a [Backend]CompiledFunction object
  • Host Runtime (HostManager.cpp) now takes that CompiledFunction object and deploys to a device if needed (Provisioner.cpp)
  • For image-classifier, images are then loaded in and converted/normalized if needed, then passed as input tensors to the CompiledFunction to run. Results are returned and processed in image-classifier to determine the most likely label, which is printed. (See below for a way to inspect the compiled graph without a debugger.)
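If you'd rather not use a debugger, there's also a loader flag for dumping the final graph as a Graphviz DAG; I'm going from memory on the exact spelling, so double-check image-classifier -help on your build:

${GLOW_PATH}/build/bin/image-classifier ${GLOW_PATH}/tests/images/imagenet/cat_285.png -image-mode=0to1 -m resnet18.onnx -model-input-name=data -cpu -dump-graph-DAG=graph.dot
dot -Tpdf graph.dot -o graph.pdf

The .dot file shows every Node left in the graph after the optimization/lowering steps above.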

It may be useful to take a look at our documentation on our IR (docs/IR.md in the Glow repo).

I deleted the lib folder from Glow and executed the following command.

${GLOW_PATH}/build/bin/image-classifier ${GLOW_PATH}/tests/images/imagenet/cat_285.png -image-mode=0to1 -m resnet18.onnx -model-input-name=data -cpu

But I'm still getting the expected output, even though there are no .cpp files present, since I deleted the lib folder before running the command. What did I miss here? Is something wrong with my system, or is this specific to Glow?

I don't understand what you're expecting to see when you delete the lib folder. Glow is C++, so it's compiled into a binary. If you compile the image-classifier binary and then delete the .cpp files it was compiled from, the binary will still run.
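If you want to convince yourself of this, on Linux you can list what the binary actually needs at runtime. The Glow code itself is linked into the executable, so only system and third-party shared libraries show up (the exact list depends on your build configuration):

ldd ${GLOW_PATH}/build/bin/image-classifier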

If I want to run the code after changing a .cpp file (say partitionutils.cpp), how can I do it? Do I need to rebuild? Can you please tell me how?

Yes, you need to rebuild. For example, if you previously built with ninja image-classifier and then make changes to a .cpp or .h file, you need to rerun ninja image-classifier before running your command. This is how compiled languages like C++ work.
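Concretely, the edit-rebuild-run loop looks something like this (assuming a Ninja build directory at ${GLOW_PATH}/build, as in your earlier commands):

cd ${GLOW_PATH}/build
ninja image-classifier
./bin/image-classifier ${GLOW_PATH}/tests/images/imagenet/cat_285.png -image-mode=0to1 -m resnet18.onnx -model-input-name=data -cpu

Ninja tracks dependencies, so it only recompiles the files affected by your change.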

Yes! It worked. Thank you very much!
