@Raghav_Gurbaxani, have you tried using histogram observer for activation? In most cases this could improve the accuracy of the quantized model. You can do:
model.qconfig = torch.quantization.QConfig(
    activation=torch.quantization.default_histogram_observer,
    weight=torch.quantization.default_per_channel_weight_observer)
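For reference, a minimal end-to-end sketch of that flow with the histogram qconfig (TinyNet and the random calibration inputs here are placeholders, not the model from this thread):

```python
import torch
import torch.nn as nn

# Placeholder model: quant/dequant stubs wrap a fusable conv+relu pair.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyNet().eval()
model.qconfig = torch.quantization.QConfig(
    activation=torch.quantization.default_histogram_observer,
    weight=torch.quantization.default_per_channel_weight_observer)
torch.quantization.fuse_modules(model, [['conv', 'relu']], inplace=True)
torch.quantization.prepare(model, inplace=True)
with torch.no_grad():
    for _ in range(4):                      # calibration passes
        model(torch.randn(1, 3, 32, 32))
torch.quantization.convert(model, inplace=True)
print(type(model.conv).__name__)  # the fused conv should now be a quantized ConvReLU2d
```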
Thanks @hx89, it would be great if you could post that example for comparing module-level quantization error.
In the meantime, I tried the histogram observer and the result is still pretty bad. Any other suggestions?
Have you checked the accuracy of fused_model? By checking the accuracy of fused_model before converting to int8 model we can know if the issue is in the preprocessing part or in the quantized model.
If fused_model has good accuracy, the next step is to check the quantization error of the weights. Could you try the following code:
import numpy as np

def l2_error(ref_tensor, new_tensor):
    """Compute the l2 error between two tensors.

    Args:
        ref_tensor (numpy array): Reference tensor.
        new_tensor (numpy array): New tensor to compare with.

    Returns:
        abs_error: l2 error
        relative_error: relative l2 error
    """
    assert (
        ref_tensor.shape == new_tensor.shape
    ), "The shape between two tensors is different"
    diff = new_tensor - ref_tensor
    abs_error = np.linalg.norm(diff)
    ref_norm = np.linalg.norm(ref_tensor)
    if ref_norm == 0:
        if np.allclose(ref_tensor, new_tensor):
            relative_error = 0
        else:
            relative_error = np.inf
    else:
        relative_error = np.linalg.norm(diff) / ref_norm
    return abs_error, relative_error
float_model_dbg = fused_model
qmodel_dbg = quantized
for key in float_model_dbg.state_dict().keys():
    float_w = float_model_dbg.state_dict()[key]
    qkey = key
    # Get rid of the extra hierarchy of the fused Conv in the float model
    if key.endswith('.weight'):
        qkey = key[:-9] + key[-7:]
    if qkey in qmodel_dbg.state_dict():
        q_w = qmodel_dbg.state_dict()[qkey]
        if q_w.dtype == torch.float:
            abs_error, relative_error = l2_error(float_w.numpy(), q_w.detach().numpy())
        else:
            abs_error, relative_error = l2_error(float_w.numpy(), q_w.dequantize().numpy())
        print(key, ', abs error = ', abs_error, ", relative error = ", relative_error)
It should print out the quantization error for each Conv weight such as:
features.0.0.weight , abs error = 0.21341866 , relative error = 0.01703797
features.3.squeeze.0.weight , abs error = 0.095942035 , relative error = 0.012483358
features.3.expand1x1.0.weight , abs error = 0.071949296 , relative error = 0.010309489
features.3.expand3x3.0.weight , abs error = 0.18284422 , relative error = 0.025256516
features.4.squeeze.0.weight , abs error = 0.088713735 , relative error = 0.011313644
features.4.expand1x1.0.weight , abs error = 0.0780085 , relative error = 0.0126931975
...
@hx89 the performance of the fused model is good
That means there’s something wrong on the quantization side, not the fusion side.
Here’s the log of the relative norm errors 
Can you suggest what to do next? Is there any way to reduce these errors, apart from QAT of course?
Looks like the first Conv basenet.slice1.3.0.weight has the largest error. Could you try skipping quantization of that Conv and keeping it as a float module? We have previously seen that the first Conv of some CV models is sensitive to quantization, and skipping it gives better accuracy.
@hx89 actually it seems like all these have pretty high relative errors 
[basenet.slice1.7.0.weight, basenet.slice1.10.0.weight, basenet.slice2.14.0.weight, basenet.slice2.17.0.weight, basenet.slice3.20.0.weight, basenet.slice3.24.0.weight, basenet.slice3.27.0.weight, basenet.slice4.30.0.weight, basenet.slice4.34.0.weight]
Keeping a few layers as float while converting the rest to int8 seems like a good idea, but I am not sure how to pass the partial model to torch.quantization.convert() for quantization and then combine the partially quantized model and the unquantized layers for inference on the image.
Could you provide an example? Thanks a ton.
It’s actually simpler. To skip the first conv, for example, there are two steps:
Step 1: Move the quant stub after the first conv in the forward function of the module.
For example in the original quantizable module, quant stub is at the beginning before conv1:
class QuantizableNet(nn.Module):
    def __init__(self):
        ...
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv1(x)
        x = self.maxpool(x)
        x = self.fc(x)
        x = self.dequant(x)
        return x
To skip the quantization of conv1 we can move self.quant() after conv1:
class QuantizableNet(nn.Module):
    def __init__(self):
        ...
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.conv1(x)
        x = self.quant(x)
        x = self.maxpool(x)
        x = self.fc(x)
        x = self.dequant(x)
        return x
Step 2: Then we need to set the qconfig of conv1 to None after prepare(), this way PyTorch knows we want to keep conv1 as float module and won’t swap it with quantized module:
model = QuantizableNet()
...
torch.quantization.prepare(model, inplace=True)
model.conv1.qconfig = None
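Putting the two steps together, here is a self-contained sketch. The layer sizes and the default_qconfig are illustrative placeholders, not your network:

```python
import torch
import torch.nn as nn

class QuantizableNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 4, 3)       # will stay float
        self.maxpool = nn.MaxPool2d(2)
        self.fc = nn.Linear(4 * 15 * 15, 10)  # will be quantized
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.conv1(x)                 # float region
        x = self.quant(x)                 # quantization starts here
        x = self.maxpool(x)
        x = x.reshape(x.size(0), -1)
        x = self.fc(x)
        return self.dequant(x)

model = QuantizableNet().eval()
model.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(model, inplace=True)
model.conv1.qconfig = None                # step 2: keep conv1 float
with torch.no_grad():
    model(torch.randn(2, 3, 32, 32))      # calibration pass
torch.quantization.convert(model, inplace=True)
```

After convert(), conv1 should still be an nn.Conv2d while fc is swapped for a quantized Linear.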
@hx89
Thank you for your advice. I tried placing the QuantStub after slice4 in basenet (line 157) and the DeQuantStub at the end. I also set the qconfig of slices 1–4 to None.
But now I get the error:
RuntimeError: All dtypes must be the same. (quantized_cat at /Users/distiller/project/conda/conda-bld/pytorch_1570710797334/work/aten/src/ATen/native/quantized/cpu/qconcat.cpp:59)
raised by self.skip_add.cat() (line 88).
My guess is that it is trying to concat fp32 and int8 tensors, hence the problem.
I tried moving my QuantStub around, but my network has a lot of concat layers, so I always run into this problem.
Any ideas on how to deal with this issue ?
Thanks again for your help so far
There are a couple of things I noticed in your partial_quantized_craft.py:

In line 103: y=self.basenet.dequant(y). It would be better to define the dequant in the CRAFT class and use that, instead of reusing the dequant from basenet.

For the error you got: it’s because you moved quant() down, so h_relu2_2, for example, became float. You can add a quant stub so that the output is still int8:
...
h_relu2_2 = h
...
h_relu2_2_int8 = self.quant2(h_relu2_2)
...
out = vgg_outputs(h_fc7, h_relu5_3, h_relu4_3, h_relu3_2, h_relu2_2_int8)
Notice you can’t reuse the same quant stub and need to create a new one, since each quant stub will have different quantization parameters.
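A small illustration of why stubs can’t be shared: each stub’s observer derives its own scale/zero-point from the tensors it sees, so activations with different ranges end up with different qparams. (The observers here stand in for what prepare() attaches to each QuantStub.)

```python
import torch

torch.manual_seed(0)
# Two independent observers, as two separate QuantStubs would get
obs_a = torch.quantization.default_histogram_observer()
obs_b = torch.quantization.default_histogram_observer()
obs_a(torch.randn(1000))          # activation roughly in [-4, 4]
obs_b(torch.randn(1000) * 10.0)   # activation roughly in [-40, 40]
scale_a, _ = obs_a.calculate_qparams()
scale_b, _ = obs_b.calculate_qparams()
print(float(scale_a), float(scale_b))  # the wider range gets a larger scale
```

Sharing one stub would mix these statistics and quantize both tensors with a single compromise scale.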
Could you try quantizing only vgg16_bn first to see how the accuracy is? If the accuracy is good, then since vgg16_bn is the dominant part of the computation we’ve already gained most of the performance, and we can move on to quantizing the outer class. To quantize vgg16_bn only, you can do the following:
def forward(self, X):
    X = self.quant(X)
    h = self.slice1(X)
    h_relu2_2 = h
    h = self.slice2(h)
    h_relu3_2 = h
    h = self.slice3(h)
    h_relu4_3 = h
    h = self.slice4(h)
    h = self.quant(h)
    h_relu5_3 = h
    h = self.slice5(h)
    h_fc7 = h
    h_fc7 = self.dequant1(h_fc7)
    h_relu5_3 = self.dequant2(h_relu5_3)
    h_relu4_3 = self.dequant3(h_relu4_3)
    h_relu3_2 = self.dequant4(h_relu3_2)
    h_relu2_2 = self.dequant5(h_relu2_2)
    vgg_outputs = namedtuple("VggOutputs", ['fc7', 'relu5_3', 'relu4_3', 'relu3_2', 'relu2_2'])
    out = vgg_outputs(h_fc7, h_relu5_3, h_relu4_3, h_relu3_2, h_relu2_2)
    return out
Hi Raghav,
I see one more error: you are using the same float functional module at multiple locations, e.g. https://github.com/raghavgurbaxani/experiments/blob/master/partial_quantized_craft.py#L88 etc. This will cause the activations to be quantized incorrectly. A float functional module can be used only once, since each module collects statistics on the activations it sees. Can you make all of them unique?
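A hedged sketch of the fix: one FloatFunctional instance per call site, so each cat collects statistics for its own activations. TwoCats and its shapes are illustrative, not the modules in partial_quantized_craft.py:

```python
import torch
import torch.nn as nn

class TwoCats(nn.Module):
    def __init__(self):
        super().__init__()
        # One FloatFunctional per concat site, never shared
        self.cat1 = nn.quantized.FloatFunctional()
        self.cat2 = nn.quantized.FloatFunctional()

    def forward(self, a, b, c):
        x = self.cat1.cat([a, b], dim=1)     # observed separately...
        return self.cat2.cat([x, c], dim=1)  # ...from this concat

m = TwoCats().eval()
out = m(torch.randn(1, 2, 4, 4), torch.randn(1, 2, 4, 4), torch.randn(1, 3, 4, 4))
print(out.shape)
```

After prepare(), each instance gets its own observer, so the two concat outputs are quantized with independent qparams.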
@hx89 Thank you so much for your advice. Based on points 1&2, I tried several configurations and they worked much better.
and the model size reduced from 84 MB to 36 MB (quant() placed after slice 2 in vgg)
Here’s another result from a model of 75 MB (quant() placed after slice 4 in vgg)
I am still trying other configurations to improve my results.
In the meantime, I also want to try your configuration (quantize vgg_bn only). Could you explain why your code has 2 quant() calls and 5 dequant() calls? Which qconfigs must be set to None?
Thanks again for your help
This is great to see the accuracy is getting better!
Are these results obtained after fixing the issue @raghuramank100 pointed out?
For option 3 there’s a typo: there should be only one quant(), as in:
def forward(self, X):
    X = self.quant(X)
    h = self.slice1(X)
    h_relu2_2 = h
    h = self.slice2(h)
    h_relu3_2 = h
    h = self.slice3(h)
    h_relu4_3 = h
    h = self.slice4(h)
    h_relu5_3 = h
    h = self.slice5(h)
    h_fc7 = h
    h_fc7 = self.dequant(h_fc7)
    h_relu5_3 = self.dequant(h_relu5_3)
    h_relu4_3 = self.dequant(h_relu4_3)
    h_relu3_2 = self.dequant(h_relu3_2)
    h_relu2_2 = self.dequant(h_relu2_2)
    vgg_outputs = namedtuple("VggOutputs", ['fc7', 'relu5_3', 'relu4_3', 'relu3_2', 'relu2_2'])
    out = vgg_outputs(h_fc7, h_relu5_3, h_relu4_3, h_relu3_2, h_relu2_2)
    return out
And we may not need 5 dequant() stubs. Previously I thought each output activation has a different distribution, so we would need 5 of them so that each dequant collects statistics for its specific output activation. But in fact the input of each dequant is already an int8 activation carrying its own qparams, and dequant has no state, so we can share a single dequant().
If you make changes above, you can just set qconfig at model.basenet level instead of model level:
model.basenet.qconfig = torch.quantization.QConfig(
    activation=torch.quantization.default_histogram_observer,
    weight=torch.quantization.default_per_channel_weight_observer)
I think this way you don’t need to set any qconfig to None, and PyTorch will only quantize the basenet.
Yes, dequants can be shared as they are stateless. quant() cannot be shared as it collects statistics.
@hx89
Thanks for your help. I tried quantizing only the VGG16 basenet part as per your suggestion; the network compressed from 84 MB to 28 MB. Here’s the result:
Although bounding boxes are well aligned, it completely misses out on ‘23’. I still need to figure out the optimum configuration for quantization.
Do you think training on these quantized weights for a few epochs may help ? Any other quantization improvements I can try ?
Thanks again.
I think you are very close to the accuracy of the float model. Next you can try skipping the Conv layers in basenet one by one until the accuracy is acceptable.
Another possible way to improve accuracy is quantization-aware training, which is similar to the idea you mentioned. There’s a reference script in torchvision you can take a look at:
Hi @Raghav_Gurbaxani, just wanted to check whether you were able to achieve acceptable accuracy for the quantized model? It would be great if you could share some updates.
Hey @hx89 @raghuramank100, thank you for all your help. The static and dynamic quantization worked well. I am trying out quantization-aware training now.
I am trying to quantize a text detection model based on MobileNet (model definition here).
After inserting the quant and dequant stubs, fusing all the conv+bn+relu and conv+relu blocks, and replacing cat with skip_add.cat(), I perform static quantization (script: https://github.com/raghavgurbaxani/Quantization_Experiments/blob/master/try_quantization.py).
After performing quantization, the model size doesn’t go down (in fact it increases):
Original Size:
Size (MB): 6.623636
Fused model Size:
Size (MB): 6.638188
Quantized model Size:
Size (MB): 7.928258
I have even printed the final quantized model here
I changed the qconfig to fused_model.qconfig = torch.quantization.default_qconfig, but the quantized model size is still 6.715115 MB.
Why doesn’t the model size reduce?
Hi @Raghav_Gurbaxani, I am also trying to quantize the CRAFT model. Could you share your awesome work!
That is unexpected. Could you print the model before and after quantization? It looks like the one in Quantization_Experiments/quantized_model.txt at master · raghavgurbaxani/Quantization_Experiments · GitHub has only part of the model quantized.
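For diagnosing this, a hedged sketch of two helpers: one prints the on-disk size (the usual save-state_dict trick), the other lists leaf modules that are still plain torch.nn after convert(). If many Conv/Linear layers show up as still float, the qconfig did not reach them, and leftover observers can even make the checkpoint larger than the original. The tiny Linear model at the bottom is just a demo, not your network:

```python
import os
import torch
import torch.nn as nn

def print_size_of_model(model, tag=""):
    # Serialize the state dict and report its size on disk
    torch.save(model.state_dict(), "temp.p")
    size_mb = os.path.getsize("temp.p") / 1e6
    os.remove("temp.p")
    print(tag, "Size (MB):", size_mb)
    return size_mb

def float_leaf_modules(model):
    # Leaf modules whose class still lives in torch.nn.modules,
    # i.e. modules that were NOT swapped for quantized versions
    return [name for name, m in model.named_modules()
            if not list(m.children())
            and type(m).__module__.startswith("torch.nn.modules")]

float_model = nn.Sequential(nn.Linear(256, 256)).eval()
size_before = print_size_of_model(float_model, "Float")
# Dynamic quantization as a quick demo that a swapped Linear shrinks
qmodel = torch.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8)
size_after = print_size_of_model(qmodel, "Quantized")
print("Still float:", float_leaf_modules(qmodel))
```

When the swap actually happens, the int8 weights make the checkpoint roughly 4x smaller; an unchanged or larger size is a strong hint that most modules never got a qconfig.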