Dear all,

I have trained a ResNet model using PACT. Now I want to convert it using the PyTorch static quantization package. Can I force torch.quantization.QuantStub() to map my input to the qint8 format, i.e. integer values in [-128, 127], instead of quint8 values in [0, 255]?

Thanks in advance,

Max

Use `qint8` for the `dtype` argument of the QConfig, e.g.:

```
import torch
from torch.quantization import (
    QConfig,
    MovingAverageMinMaxObserver,
    PerChannelMinMaxObserver,
)

qconfig = QConfig(
    activation=MovingAverageMinMaxObserver.with_args(
        qscheme=torch.per_tensor_symmetric, dtype=torch.qint8
    ),
    weight=PerChannelMinMaxObserver.with_args(
        dtype=torch.qint8, qscheme=torch.per_channel_symmetric
    ),
)
```
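For reference, here is where such a QConfig plugs into the eager-mode static quantization flow. This is a minimal sketch with a hypothetical toy model standing in for your ResNet; the default `fbgemm` qconfig is used only so the snippet runs end to end, and you would assign your custom QConfig on the same line instead:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Hypothetical stand-in for the PACT-trained ResNet."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.conv(self.quant(x)))

model = TinyNet().eval()
# Assign the QConfig here -- your custom one, or a backend default.
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)

# Calibrate the observers on representative data (random here).
with torch.no_grad():
    model(torch.randn(4, 3, 32, 32))

torch.quantization.convert(model, inplace=True)
out = model(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 8, 30, 30])
```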

Note that this may result in degraded accuracy.

Dear @myshabako,

thanks for the answer. Forcing the QConfig as you suggested produces the following error during the QuantizedConv2D forward:

```
RuntimeError: expected scalar type QUInt8 but found QInt8
```

Do you know why it is not supported?

Thanks in advance,

Max

Hi @Massimiliano_Datres, you can try using the `qnnpack` backend, which supports qint8 activations for some ops. The fbgemm backend only supports quint8 activations.
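A short sketch of switching the quantized engine, assuming `qnnpack` was compiled into your PyTorch build (check `supported_engines` first, since availability varies by platform):

```python
import torch

# List the quantized backends this build supports.
print(torch.backends.quantized.supported_engines)

# Select qnnpack before converting/running the model, so qint8
# activations are dispatched to kernels that support them.
if "qnnpack" in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = "qnnpack"

print(torch.backends.quantized.engine)
```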
