I am exploring Glow to optimize a model that consists of a series of conv + batch normalization + relu layers, and unfortunately I do not see a good guide for this. I looked through the available options ( ./build/bin/model-compiler -help-list ) but I am lost as to which I should apply.
Optimizations are done automatically, and some depend on the backend you're running on. In the model snippet you have here I'd expect the BatchNorms to be optimized away (fused into the Conv's weights). You should be able to pass -dump-graph-DAG=graph.dot to see the Glow graph after optimizations are done. Some things, such as Relu fusing, may be done at lower levels of IR which are a bit harder to see (e.g. they might happen in LLVM IR for the CPU backend or other LLVM-based backends).
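For intuition on why the BatchNorms can be folded away at inference time, here is a minimal NumPy sketch (not Glow code, just the underlying arithmetic): BatchNorm with fixed running statistics is an affine per-channel transform, so its scale and shift can be absorbed into the preceding Conv's weights and bias. A 1x1 convolution is used so the conv reduces to a matmul; the same per-output-channel folding applies to larger kernels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes: a 1x1 convolution over C_in channels producing C_out channels,
# so the conv reduces to a matmul per spatial position.
C_in, C_out, HW = 3, 4, 5
x = rng.normal(size=(C_in, HW))       # input feature map (channels x positions)
W = rng.normal(size=(C_out, C_in))    # 1x1 conv weights
b = rng.normal(size=(C_out, 1))       # conv bias

# BatchNorm parameters (inference mode: fixed running statistics).
gamma = rng.normal(size=(C_out, 1))
beta = rng.normal(size=(C_out, 1))
mean = rng.normal(size=(C_out, 1))
var = rng.uniform(0.5, 2.0, size=(C_out, 1))
eps = 1e-5

# Unfused: Conv followed by BatchNorm.
y_ref = gamma * ((W @ x + b) - mean) / np.sqrt(var + eps) + beta

# Fused: fold BN's per-channel scale and shift into the conv weights/bias.
scale = gamma / np.sqrt(var + eps)    # per-output-channel scale
W_fused = W * scale                   # scale each output channel's filter
b_fused = scale * (b - mean) + beta
y_fused = W_fused @ x + b_fused

# The fused conv reproduces the Conv+BN result exactly (up to float rounding).
assert np.allclose(y_ref, y_fused)
```

This is why the optimized graph dump shows no BatchNorm nodes: the transform is bit-for-bit absorbed into the convolution once the running statistics are frozen.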