Would it be possible to compile a model to LLVM-IR instead of the GLOW Low-Level IR?
For context, our CPU backend is LLVM-based, and we use LLVM IR as part of it. We compile our
"libjit.cpp" and other similar .cpp files, which contain kernels for each operator across different precisions, to LLVM IR. Then, when we load a model, we generate high-level Glow IR (Nodes), and from that we generate low-level Glow IR (Instructions). Finally, we iterate over the low-level Glow IR and splice in the corresponding precompiled kernels that are already in LLVM IR.
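To make the pipeline above concrete, here is a minimal sketch of what a libjit-style kernel might look like. The name and signature are purely illustrative assumptions, not an actual libjit symbol; the real kernels live in libjit.cpp and follow Glow's own conventions.

```cpp
#include <cstddef>

extern "C" {
// Hypothetical element-wise add kernel over float buffers. The CPU
// backend compiles kernels like this to LLVM IR ahead of time, then
// splices them in while lowering a model's low-level Glow IR.
void libjit_example_element_add_f(float *dest, const float *lhs,
                                  const float *rhs, size_t numElements) {
  for (size_t i = 0; i < numElements; i++) {
    dest[i] = lhs[i] + rhs[i];
  }
}
}
```

The `extern "C"` linkage matters: it keeps the symbol name unmangled so the backend can look the kernel up by name in the precompiled LLVM IR module.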
So, you’re wondering about skipping the low-level IR entirely and going straight from high-level IR to LLVM IR? What is the benefit/purpose here? I believe it would be possible, but it would take a decent amount of work to write all of the lowering logic, and I am unsure of the benefit of doing so. Would you still be using our libjit kernels?
I am looking for a way to extract that LLVM IR from the CPU Backend - is there any such way to do this?
We have a flag, -dump-llvm-ir, which you can use to dump the LLVM IR to stdout. Note that this only works for LLVM-based backends, e.g. our CPU backend. For example:
./bin/image-classifier tests/images/imagenet/cat_285.png -use-imagenet-normalization -image-mode=0to1 -m=resnet50 -model-input-name=gpu_0/data -dump-llvm-ir -cpu
The solution you provided doesn’t work for me.
- If you don’t specify “-cpu”, no LLVM IR is printed out.
- Even after adding “-cpu”, only libjit’s LLVM IR gets dumped; no model IR is dumped out. In the standalone bundle, the main.cpp calls an extern function resnet50(…), but I can’t find such a function in the dumped LLVM IR. The only thing close to it is something called “@jitmain”. Does jitmain get renamed to resnet50 later on? Otherwise, how does main.o link against it?

Is there a way to dump out the whole LLVM IR of both the model itself and libjit?
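For reference, the linking arrangement described in the question can be sketched as follows. The entry-point name resnet50 is taken from the post above; the three-buffer signature and the stub body are assumptions for illustration only, not verified against Glow's generated bundle header.

```cpp
#include <cstdint>

// A standalone bundle's main.cpp declares the model entry point with
// C linkage; the linker then resolves this symbol against the object
// file the bundle produced (e.g. resnet50.o).
extern "C" int resnet50(uint8_t *constantWeights, uint8_t *mutableWeights,
                        uint8_t *activations);

// Stand-in definition so this sketch is self-contained; in a real
// bundle this symbol comes from the compiled model's object file.
extern "C" int resnet50(uint8_t *, uint8_t *, uint8_t *) { return 0; }

int runModel() {
  // Real buffer sizes would come from the bundle's generated header;
  // these one-byte buffers are placeholders.
  uint8_t constants[1] = {0};
  uint8_t mutables[1] = {0};
  uint8_t activations[1] = {0};
  return resnet50(constants, mutables, activations);
}
```

The key point of the sketch is that main.o only needs an extern "C" declaration matching the symbol emitted into the bundle object; whether @jitmain is renamed or a separately named entry is emitted is exactly what the question asks.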
RE: 1: Yeah, sorry, that was my fault. You have to be using an LLVM-based backend for -dump-llvm-ir to work correctly.
I do not know all of the details on LLVM-based backends. I would suggest asking on a GH issue via this link, and someone more knowledgeable about LLVM backends and bundles will be able to answer.
You’re doing it right, actually: -cpu -dump-llvm-ir is the right way to dump the LLVM IR generated by the CPU backend. The problem is that the output is pretty overwhelming, because you get all of libjit’s IR dumped in addition to the IR generated for your model.
You’ll see two big sections in the dumped output: “before optimizations” and “after optimizations”. The “after” section will just have your model code, since by then we have done inlining and specialization and pruned unused functions. Look for @jitmain (or @main) in either section to see where the model code starts.
How can I dump LLVM IR from my ONNX file? I tried this command:
build/bin/image-classifier tests/images/mnist/*.png -m model.onnx -model-input-name=input -dump-llvm-ir
This is not working.
My problem is solved by this command:
./bin/model-compiler -model model.onnx -emit-bundle ./mybundle --backend=CPU -dump-llvm-ir