Quantized Pytorch model exports to onnx

Yeah, but I didn't see that error when I changed the backend from 'fbgemm' to 'qnnpack'.

The error I saw is the same as the one ZyrianovS reported in post 24 and post 25.
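For context, this is roughly the flow I mean. It's a minimal sketch with a placeholder model, assuming post-training static (eager-mode) quantization; the ONNX export at the end is the step where the backend-dependent error shows up. On older PyTorch versions the quantization APIs live under `torch.quantization` instead of `torch.ao.quantization`.

```python
import torch

# Select the quantized engine: 'qnnpack' instead of 'fbgemm'
torch.backends.quantized.engine = 'qnnpack'

class TinyModel(torch.nn.Module):
    """Placeholder model for illustration; my real model differs."""
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.fc = torch.nn.Linear(4, 2)
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = TinyModel().eval()

# Use the qconfig matching the selected engine
model.qconfig = torch.ao.quantization.get_default_qconfig('qnnpack')
torch.ao.quantization.prepare(model, inplace=True)
model(torch.randn(1, 4))  # calibration pass with representative data
torch.ao.quantization.convert(model, inplace=True)

# Export to ONNX; this is where the error occurs for me
torch.onnx.export(model, torch.randn(1, 4), 'quantized.onnx',
                  opset_version=13)
```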