Is it possible to quantize only activations when using QAT?

Hi, I’m trying to quantize only the activations of a model, not its weights. I followed PyTorch’s QAT quantization documentation, but it’s hard to find what I’m looking for there.


PyTorch supports this as `default_activation_only_qconfig` in `torch.quantization.qconfig`. I’m using PyTorch 1.6.0.
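For reference, here is a minimal sketch of how that qconfig could be applied in the eager-mode QAT workflow. The model, layer sizes, and input shape are just illustrative; the only assumption from the question is that `default_activation_only_qconfig` (fake-quantized activations, identity on weights) is used in place of the usual QAT qconfig:

```python
import torch
import torch.nn as nn
import torch.quantization as tq

class TinyNet(nn.Module):
    """Illustrative model with quant/dequant stubs for eager-mode QAT."""
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyNet().train()
# Fake-quantize activations only; the weight "fake quant" is nn.Identity,
# so weights pass through unchanged during training.
model.qconfig = tq.default_activation_only_qconfig
tq.prepare_qat(model, inplace=True)

out = model(torch.randn(1, 3, 8, 8))
```

After `prepare_qat`, the conv is swapped for its QAT counterpart, but its `weight_fake_quant` is an `nn.Identity`, so only the activation observers/fake-quant modules take effect.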

To add to that,

I’m unsure about the goal here: pretty much the only reason to do QAT is to learn the best weights when weight quantization is applied. QAT’s central thesis is about how you can propagate gradients across a discontinuous function that quantizes the weights, so doing QAT without weight quantization is a bit of a mismatch.
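To make that thesis concrete, here is a toy sketch of the trick QAT relies on, the straight-through estimator: the forward pass applies a round (whose true derivative is zero almost everywhere), while the backward pass pretends it was the identity. Names and the scale value are illustrative, not from any PyTorch API:

```python
import torch

class RoundSTE(torch.autograd.Function):
    """Round to nearest in the forward pass, but let gradients
    pass straight through in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pretend round() was the identity.
        return grad_output

x = torch.linspace(-1.0, 1.0, 5, requires_grad=True)
scale = 0.25
# Fake-quantize: scale, round, rescale -- the forward value is quantized,
# yet gradients still flow back to x.
y = RoundSTE.apply(x / scale) * scale
y.sum().backward()
# x.grad is all ones, even though round() has zero derivative a.e.
```

This is the mechanism that lets the *weights* keep learning under quantization; without weight quantization there is no discontinuity on the weight path for it to bridge.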

If you only want to quantize the activations, you may want to consider an approach better suited to that goal.