Quantization example resnet50

Hi Jordan,

Is it possible to save the quantized model as a readable file? E.g. a protobuf file where I can see the scales and zero points of each layer

We do have an ONNX exporter, but I’m not sure how well it handles quantized nodes. If you’re interested in using it, I would create an issue on GitHub asking about this, tagging @putivsky.

That said, if your goal is just to look at the scales and zero points, you can do that by quantizing the model and then dumping a DOT file of the DAG. For example, if you’re using one of our Loaders (e.g. image-classifier), pass -dump-graph-DAG=graph.dot.
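Once you have the DOT dump, the quantization parameters can be pulled out with a small script. This is a hedged sketch: it assumes Glow renders quantized tensor types in the DOT labels in a form like `i8[S:0.039216 O:-3]` (scale after `S:`, zero point after `O:`); check the format in your own dump and adjust the regex if it differs.

```python
import re

# Assumed Glow DOT format for quantized types: "i8[S:<scale> O:<offset>]".
# Verify against your actual graph.dot before relying on this pattern.
QPARAM_RE = re.compile(r"i8\[S:([0-9.eE+-]+)\s+O:(-?\d+)\]")

def extract_qparams(dot_text):
    """Return a list of (scale, zero_point) pairs found in the DOT text."""
    return [(float(s), int(o)) for s, o in QPARAM_RE.findall(dot_text)]

# Example on a fabricated DOT fragment:
sample = 'label="conv1\\ni8[S:0.039216 O:-3]<1 x 56 x 56 x 64>"'
print(extract_qparams(sample))  # [(0.039216, -3)]
```

In practice you would read `graph.dot` from disk (`extract_qparams(open("graph.dot").read())`) and pair each result with the node name from the same label.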

Hi Jordan,

When I run image-classifier with only one image.png as input, it works fine and generates the profile file, but if I give more than one image as input it fails with the error below.

Will the profile file obtained from only “one image” be correct, so that I can proceed with it, or not? What difference does it make?

The command used and the error are below.
Thanks for reading.

Command used:

```
D:\IMXRT\nxp\Glow\bin\image-classifier.exe images\0_1009.png images\1_1008.png images\2_1065.png images\3_1020.png images\4_1059.png images\5_1087.png images\6_1099.png images\7_1055.png images\8_1026.png images\9_1088.png -image-mode=0to1 -image-layout=NCHW -image-channel-order=BGR -model=models\mnist_7.onnx -model-input-name=Input3 -dump-profile=profile.yml
```

Terminal logs:

```
Model: .
Running 1 thread(s).

name : Times212_reshape0
Input : float<10 x 16 x 4 x 4>
Dims : [1, 256]
Layout : *
users : 1
Result : float<1 x 256>

Reshape into a different size
For comparison LHS Equal RHS with:
LHS: 256
RHS: 2560

WARNING: Logging before InitGoogleLogging() is written to STDERR
F0603 14:32:10.634238 15296 Error.cpp:119] exitOnError(Error) got an unexpected ErrorValue:
Error message: Function verification failed.
```

Hi @Rahul_Dhumal,

The number of images you provide on the command line is used to determine the batch size when compiling the model. Your command passes 10 images, so Glow tries to use a batch size of 10.

Some models have specific ops that assume/require a specific batch size, e.g. a Resize op. If the original model expected batch size 1 and contains such an op, then you have to use batch size 1 in Glow as well. I’m assuming this is what is happening with your model.
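Your error log actually shows this mismatch in the Reshape node: with batch size 1 the input has 1 × 16 × 4 × 4 = 256 elements, which reshapes cleanly into [1, 256], but with 10 images it has 10 × 16 × 4 × 4 = 2560 elements, and the fixed [1, 256] target no longer fits (the “LHS: 256, RHS: 2560” line). A quick NumPy sketch of the same arithmetic:

```python
import numpy as np

# Batch size 1, as the model's Reshape was exported to expect:
# 1 * 16 * 4 * 4 = 256 elements fit the fixed [1, 256] target.
x1 = np.zeros((1, 16, 4, 4), dtype=np.float32)
print(x1.reshape(1, 256).shape)  # (1, 256)

# Batch size 10, as implied by passing 10 images:
# 10 * 16 * 4 * 4 = 2560 elements no longer match [1, 256],
# which is the "LHS: 256, RHS: 2560" verification failure.
x10 = np.zeros((10, 16, 4, 4), dtype=np.float32)
try:
    x10.reshape(1, 256)
except ValueError as err:
    print("reshape failed:", err)
```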

If you are OK with a batch size of 1, you can pass -minibatch=1 in your command. This tells Glow to compile with a batch size of 1 no matter how many images are passed, and then run them one by one. It needs to be used for both -dump-profile and -load-profile.
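Concretely, that would be your original command with -minibatch=1 added (image list abbreviated here):

```
D:\IMXRT\nxp\Glow\bin\image-classifier.exe images\0_1009.png ... images\9_1088.png -image-mode=0to1 -image-layout=NCHW -image-channel-order=BGR -model=models\mnist_7.onnx -model-input-name=Input3 -minibatch=1 -dump-profile=profile.yml
```

All 10 images still contribute to the profile; they are just fed through the compiled function one at a time instead of as a single batch of 10.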