Optimization guide using model-compiler

I am exploring Glow to optimize my model, which has a number of conv + batch normalization + relu blocks, and unfortunately I have not found a good guide for this. I looked through the available options (`./build/bin/model-compiler -help-list`), but I am not sure which ones I should be applying.

Here is a partial screenshot of the model:

[Partial screenshot of the model: a sequence of Conv → BatchNorm → ReLU nodes]

Does model-compiler automatically apply optimizations such as node fusion, or does one need to apply them manually?

I am using PyTorch 1.4 and exporting my model in ONNX format.
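For context, a typical model-compiler invocation for an ONNX model looks like the sketch below. The file and directory names are placeholders, and the flags follow Glow's ahead-of-time bundle documentation, so double-check them against the `-help-list` output for your build:

```shell
# Compile an ONNX model ahead-of-time with the CPU backend (names are placeholders).
./build/bin/model-compiler -backend=CPU \
    -model=model.onnx \
    -emit-bundle=bundle_dir \
    -dump-graph-DAG=graph.dot   # writes the optimized Glow graph for inspection

# Render the dumped graph with Graphviz to see which nodes were fused away.
dot -Tpng graph.dot -o graph.png
```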

Optimizations are applied automatically. Some depend on the backend you're running on. In the model snippet you have here, I'd expect the BatchNorms to be optimized away (fused into the Conv's weights). You should be able to pass `-dump-graph-DAG=graph.dot` to see the Glow graph after optimizations are done. Some things, such as ReLU fusion, may be done at lower levels of IR, which are a bit harder to see (e.g. they might be done in LLVM IR for the CPU backend or other LLVM-based backends).
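To illustrate why the BatchNorms can be optimized away entirely: because both Conv and BatchNorm are affine at inference time, the BN parameters can be folded into the conv's weights and bias per output channel. A minimal sketch of that arithmetic (the function name and toy shapes are my own, not Glow's API):

```python
import math

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold inference-time BatchNorm parameters into conv weights/bias.

    BN(conv(x)) = gamma * (W@x + b - mean) / sqrt(var + eps) + beta,
    which equals a conv with W' = W * s and b' = (b - mean) * s + beta,
    where s = gamma / sqrt(var + eps), applied per output channel.
    W is a list of per-channel flattened filters; the rest are per-channel lists.
    """
    s = [g / math.sqrt(v + eps) for g, v in zip(gamma, var)]
    W_folded = [[w * s[c] for w in W[c]] for c in range(len(W))]
    b_folded = [(b[c] - mean[c]) * s[c] + beta[c] for c in range(len(b))]
    return W_folded, b_folded

# Toy check with one output channel and a two-tap filter:
Wf, bf = fold_batchnorm([[2.0, -1.0]], [0.5],
                        gamma=[2.0], beta=[1.0], mean=[0.5], var=[3.0], eps=1.0)
# Applying the folded conv to an input is now equivalent to conv followed by BN,
# with the BatchNorm node gone from the graph.
```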

@antimora I am also having trouble finding a good guide/tutorial. Have you found any references, or could you share your own experience?

Nothing beyond what Jordan has suggested. I can confirm via graph.dot that some of the nodes, such as batch norm, are fused.