Linear_dynamic has some problems with qnnpack

Hello everyone. I recently used torch.quantization.quantize_dynamic(model, dtype=torch.qint8) to dynamically quantize my model. With fbgemm (the default engine) the quantization works, but when I switch the engine from fbgemm to qnnpack it runs into problems. I enable qnnpack like this:

model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.backends.quantized.engine = 'qnnpack'
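
Roughly, the full sequence of calls looks like this (the single Linear layer is just a placeholder for my actual model; the sizes are made up):

import torch

# Placeholder model; my real model is larger, but it is also built from nn.Linear layers.
model = torch.nn.Sequential(torch.nn.Linear(80, 40))

model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.backends.quantized.engine = 'qnnpack'

quantized_model = torch.quantization.quantize_dynamic(model, dtype=torch.qint8)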
With qnnpack it raises an error like the one shown below:

Has anyone done related work or run into the same problem?
By the way, the torch version is 1.5.0a0+b336deb.

I’d appreciate it if anybody can help me! Thanks in advance!

cc @supriyar do you know?

You don’t necessarily need to set the qconfig to qnnpack in this case. Please try using default_dynamic_qconfig instead and see if that helps solve the issue. Also make sure your build has qnnpack enabled.
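
Something along these lines (an untested sketch; float_model stands in for your model, and the qconfig_spec should list the module types you want quantized):

import torch
from torch.quantization import quantize_dynamic, default_dynamic_qconfig

# Check that this build actually ships the qnnpack backend.
print(torch.backends.quantized.supported_engines)

torch.backends.quantized.engine = 'qnnpack'

# For dynamic quantization you don't have to set model.qconfig;
# pass the dynamic qconfig explicitly for the layer types instead.
quantized = quantize_dynamic(
    float_model,  # placeholder for your float model
    qconfig_spec={torch.nn.Linear: default_dynamic_qconfig},
    dtype=torch.qint8,
)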

I was able to get this example model to work with the qnnpack backend.

import torch
from torch.quantization import default_qconfig, quantize_dynamic

class SingleLayerLinearDynamicModel(torch.nn.Module):
    def __init__(self):
        super(SingleLayerLinearDynamicModel, self).__init__()
        self.qconfig = default_qconfig
        self.fc1 = torch.nn.Linear(5, 5).to(dtype=torch.float)

    def forward(self, x):
        x = self.fc1(x)
        return x

# Select the qnnpack backend before quantizing.
torch.backends.quantized.engine = 'qnnpack'
base = SingleLayerLinearDynamicModel()
model = quantize_dynamic(base, dtype=torch.qint8)

Thanks for your reply! I found that this was a bug in my own code: in some iterations I passed a tensor with shape (0, 20, 80), i.e. an empty batch. With fbgemm this does not affect the result, but with qnnpack it raises the error shown in the picture. After fixing the bug in my code, everything works.
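
For reference, this is roughly the situation that triggered the error (a sketch; the layer sizes are made up, only the empty leading dimension matters):

import torch
from torch.quantization import quantize_dynamic

# Hypothetical stand-in for my model; the real one comes from my own code.
float_model = torch.nn.Sequential(torch.nn.Linear(80, 40))

torch.backends.quantized.engine = 'qnnpack'
qmodel = quantize_dynamic(float_model, dtype=torch.qint8)

# In some iterations the batch ended up empty: shape (0, 20, 80).
empty_batch = torch.randn(0, 20, 80)
out = qmodel(empty_batch)  # fine with fbgemm, errored with qnnpack on 1.5.0a0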
