I have trained the model
MobileNetV2 + SSD Lite in PyTorch from https://github.com/qfgaohao/pytorch-ssd/blob/master/vision/ssd/mobilenet_v2_ssd_lite.py. Now I want to use it on a Raspberry Pi 3.
I converted the .pth model into a Caffe2 model through an ONNX representation and got two files for the Caffe2 framework: init_net.pb and predict_net.pb.
As far as I know, to accelerate the model on mobile systems such as the Raspberry Pi 3 (B/B+) I should use the QNNPACK library, which enables low-precision inference using operators with the int8 data type.
How do I quantize this model? How can I run low-precision inference with QNNPACK? Are there any tutorials about this?
This category is for Glow, which is a different PyTorch backend from Caffe2 (which "natively integrates
QNNPACK"). Glow primarily targets neural network accelerators, though it does have a CPU backend and supports automatic profiling + quantization. If you want to use
QNNPACK, I believe all you need to do is make sure your model (e.g.
predict_net) is using operators such as
Int8FC, etc. and your Caffe2 model would use it.
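For illustration, a quantized op in a Caffe2 predict_net looks roughly like this in protobuf text format (the blob names, argument values, and choice of Int8Conv here are hypothetical, not taken from the poster's model):

```
op {
  input: "X_int8"
  input: "W_int8"
  input: "b"
  output: "Y_int8"
  type: "Int8Conv"
  arg { name: "Y_scale" f: 0.023 }
  arg { name: "Y_zero_point" i: 128 }
  arg { name: "kernel" i: 3 }
}
```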
However, if your network is not quantized and/or you don’t want to install Caffe2 on your Raspberry Pi, you could try using Glow to profile your model, quantize it, and then save what we call an ahead-of-time compiled “bundle”, which is just a binary to copy to your Raspberry Pi 3 to run (see docs here and here). Note that it may not perform as well as
QNNPACK; we are more focused on accelerator backends right now.
Is it possible to quantize a .pb in PyTorch and get a quantized .pb out directly, like the Bazel quantization tools? Any tutorial would be appreciated!
Thanks for the answer @jfix, but I’m a bit confused.
At the moment my model predict_net.pb does not use operators such as Int8Conv. How do I force my model to use these operators? Should I change operators such as Conv to Int8Conv, and which file should I change?
Take any op from my predict_net: do I have to make changes there or not? Should I replace type: "Conv" with type: "Int8Conv"?
Help me deal with this, please. Thanks.
So it looks like your model is only in float right now. You cannot simply replace Conv with Int8Conv etc. – in order to use quantization you need to know the quantization parameters for each operator. In Glow we call these scale and offset; in Caffe2 they are called Y_scale and Y_zero_point. These are usually based on the actual values you expect to flow through your graph.
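To make this concrete, here is a minimal pure-Python sketch of how affine quantization parameters (a scale plus a zero-point offset) are typically derived from an observed value range. The formulas follow the common uint8 affine scheme; the function names are illustrative, not Glow or Caffe2 APIs:

```python
def choose_qparams(rmin, rmax, qmin=0, qmax=255):
    """Pick scale/zero_point so [rmin, rmax] maps onto [qmin, qmax]."""
    # The range must include 0 so that 0.0 is exactly representable.
    rmin = min(rmin, 0.0)
    rmax = max(rmax, 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    # Clamp the zero point into the representable integer range.
    zero_point = max(qmin, min(qmax, zero_point))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    q = int(round(x / scale)) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

# Example: values observed during profiling span [-1.0, 3.0].
scale, zp = choose_qparams(-1.0, 3.0)
q = quantize(0.5, scale, zp)
x = dequantize(q, scale, zp)  # close to 0.5, within one scale step
```

Profiling (what Glow's -dump-profile mode does) is essentially gathering those rmin/rmax ranges for every tensor in the graph by running representative inputs through it.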
If you don’t know what the scales/offsets should be (likely the case), one option would be to use Glow’s profiling and quantization to quantize automatically. Like I said in my previous comment:
However, if your network is not quantized and/or you don’t want to install Caffe2 on your Raspberry Pi, you could try using Glow to profile your model, quantize it, and then save what we call an ahead-of-time compiled “bundle”, which is just a binary to copy to your Raspberry Pi 3 to run (see docs here and here). Note that it may not perform as well as QNNPACK; we are more focused on accelerator backends right now.
Again, this is not
QNNPACK; Glow does not use it. If you are interested in using
QNNPACK and Caffe2 on your Raspberry Pi then you could try asking the question in a separate category.
We always first import Caffe2 or ONNX protos, and generate them into Glow IR, and then profile/quantize the Glow IR from there. However once you’re in Glow IR, there is no current way to generate anything back out to Caffe2/ONNX/PyTorch protos, whether quantized or not.
If this might fit your needs, you can always follow this tutorial to get ONNX or Caffe2 from your PyTorch model, which you can then import to Glow.
Is there an example of taking a float32 ONNX graph, quantizing it with Glow, and generating Glow IR? Is there a Python interface to Glow’s profiling/quantization?
You can follow the instructions here on how to gather a profile of a model and then quantize the model. You just need an ONNX proto to load into Glow – see the page on Testing here, which discusses how to load a model using one of our example proto model Loaders. We have some limited support for Python via PyTorch through the ONNXIFI interface – you can find info here. Otherwise it’s relatively straightforward to run the Testing Loader examples I linked to above after you’ve built Glow in C++, and then quantize/run your model.
Hi @jfix, it seems Glow’s ./bin/image-classifier currently supports image classification models only? Is there a way to quantize an image-generator model, or other kinds of models? Thanks.
Hi @eric4337, we currently also have an NMT model driver called text-translator, but it’s for pre-unrolled NMT models. We also have a model-runner driver, but it’s very simple and just for testing models without any non-Constant inputs.
If you want to try other models you need to create your own driver, probably based on tools/loader/ImageClassifier.cpp if you’re interested in image-based models. This mostly means correctly loading the inputs and outputs based on their expected shape(s) and datatype(s). Also, depending on the model, you may need to add additional operator support.
Once you have those things done you can quantize the model.
Look at what is available here: https://github.com/opencv/openvino_training_extensions/tree/develop/pytorch_toolkit/nncf.
This is quantization-aware training in PyTorch with the ability to export the quantized model to ONNX.
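With NNCF, enabling quantization is driven by a JSON config passed when wrapping the model; a minimal config looks roughly like this (the field names reflect my reading of the NNCF docs and the input shape is a placeholder, so treat this as a sketch rather than a verified config):

```json
{
  "input_info": { "sample_size": [1, 3, 300, 300] },
  "compression": { "algorithm": "quantization" }
}
```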
Can you export the quantized model to ONNX? Thanks!