PyTorch Dynamic Quantization clarification

Hi - I am writing a script to quantize my .pth model (a universal image segmentation model) using the dynamic quantization technique described below.

https://pytorch.org/docs/stable/quantization.html

My .pth file is built on the Mask2Former architecture, which contains a MultiheadAttention module. I have attached a screenshot showing that this module is not supported for dynamic quantization. Can anyone confirm this, please?

I can also see that nn.Embedding is listed as not supported. Can you please confirm?

  1. Yes, we don't have a dynamically quantized MultiheadAttention kernel, so it's not supported currently.

  2. The short answer is yes, dynamic quantization for embeddings/embedding bags is supported (I made a PR for this here: [ao] updating embedding_bag support by HDCharles · Pull Request #107623 · pytorch/pytorch · GitHub, with a test showing that it works), so thanks for catching that error in the docs.

It should be noted, though, that the term dynamic/static quantization is a bit overloaded. It is both a descriptor of how floating-point activation qparams are calculated (statically, i.e. precomputed, or dynamically, i.e. at run time) and the name of the quantization flow used to obtain ops that calculate qparams in that way. Since embeddings don't have floating-point activations, you can't actually do either to those ops, but you can obtain a weight-quantized version of the embedding op that will 'play nice' with actual dynamically/statically quantized ops by using the dynamic/static quantization flow.
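As a concrete illustration, here is a minimal eager-mode sketch (a toy two-layer module, not your Mask2Former model; ToyModel, emb and fc are just illustrative names) of what that means in practice: the Linear layer, which has floating-point activations, gets genuinely dynamic quantization via quantize_dynamic, while the Embedding gets its weight-only quantized counterpart through the prepare/convert flow with float_qparams_weight_only_qconfig:

import torch
import torch.nn as nn
import torch.ao.quantization as tq

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(100, 256)  # no float activations
        self.fc = nn.Linear(256, 10)       # float activations

    def forward(self, idx):
        return self.fc(self.emb(idx))

model = ToyModel().eval()

# Embeddings only accept the weight-only float-qparams qconfig; converting with it
# yields a weight-quantized embedding op.
model.emb.qconfig = tq.float_qparams_weight_only_qconfig
tq.prepare(model, inplace=True)
tq.convert(model, inplace=True)

# The Linear layer can be dynamically quantized (activation qparams computed at run time).
model = tq.quantize_dynamic(model, qconfig_spec={nn.Linear}, dtype=torch.qint8)
print(model)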

Hi Charles, thanks for replying.

Currently, I am trying the static quantization technique with my Mask2Former-based .pth model, but I am getting the error below.

AssertionError: Embedding quantization is only supported with float_qparams_weight_only_qconfig.

Below is my model with the Mask2Former architecture (modules), which contains Embedding and MultiheadAttention modules.

Mask2Former

  1. Mask2FormerPixelLevelModule
  2. Mask2FormerTransformerModule
     a. Mask2FormerSinePositionEmbedding
     b. Mask2FormerMaskedAttentionDecoder
        i. Mask2FormerMaskedAttentionDecoderLayer
           1. Mask2FormerAttention
           2. MultiheadAttention
        ii. Mask2FormerMaskPredictor
           1. Mask2FormerMLPPredictionHead
              a. Mask2FormerPredictionBlock

Here is my code:

model_fp32.qconfig = torch.ao.quantization.get_default_qconfig('x86')
model_fp32 = torch.ao.quantization.fuse_modules(model_fp32, [['model.pixel_level_module.encoder.features.s4.b5.f.a', 'model.pixel_level_module.encoder.features.s4.b4.f.a_bn']])
# Quantization: insert observers, calibrate, then convert
model_prepare = torch.ao.quantization.prepare(model_fp32, inplace=True)
input_fp32 = torch.randn(1, 3, 544, 960)
model_prepare(input_fp32)
final_model = torch.ao.quantization.convert(model_prepare, inplace=True)
# Save the model
torch.save(final_model.state_dict(), 'model_quantised.pth')

Can you guide me with any code modifications here to make it work without errors?

You are using the same qconfig for every module in your model by applying it at the top level. If you apply the qconfig the error message mentions to your embedding modules, that would fix this error. (The top-level qconfig is ignored for modules that have their own qconfig.)

i.e. module.qconfig = other_qconfig
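To make that concrete, here is a small sketch (toy modules, not your model; Tiny, emb and fc are illustrative names) showing that a submodule's own qconfig wins over the qconfig set at the top level:

import torch
import torch.nn as nn
import torch.ao.quantization as tq

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(10, 8)
        self.fc = nn.Linear(8, 4)

    def forward(self, idx):
        return self.fc(self.emb(idx))

m = Tiny().eval()
m.qconfig = tq.get_default_qconfig('x86')              # top-level qconfig (picked up by fc)
m.emb.qconfig = tq.float_qparams_weight_only_qconfig   # overrides the top level for emb

tq.prepare(m, inplace=True)
m(torch.tensor([[1, 2, 3]]))                           # calibration pass
tq.convert(m, inplace=True)
print(m)  # emb becomes a weight-only quantized Embedding, fc a quantized Linear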

Hi Charles, thanks for the reply.

With my limited coding knowledge, I am not able to define the qconfigs for each module. Can you please help me with the lines of code needed to run it without errors?

Please find the model architecture below, which should help you define the qconfigs. Please help me!

model.transformer_module name mod Mask2FormerTransformerModule(
(position_embedder): Mask2FormerSinePositionEmbedding()
(queries_embedder): Embedding(100, 256)
(queries_features): Embedding(100, 256)
(decoder): Mask2FormerMaskedAttentionDecoder(
(layers): ModuleList(
(0-3): 4 x Mask2FormerMaskedAttentionDecoderLayer(
(self_attn): Mask2FormerAttention(
(k_proj): Linear(in_features=256, out_features=256, bias=True)
(v_proj): Linear(in_features=256, out_features=256, bias=True)
(q_proj): Linear(in_features=256, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=256, bias=True)
)
(activation_fn): ReLU()
(self_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(cross_attn): MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
)
(cross_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(fc1): Linear(in_features=256, out_features=2048, bias=True)
(fc2): Linear(in_features=2048, out_features=256, bias=True)
(final_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
)
(layernorm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(mask_predictor): Mask2FormerMaskPredictor(
(mask_embedder): Mask2FormerMLPPredictionHead(
(0): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
)
(1): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
)
(2): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Identity()
)
)
)
)
(level_embed): Embedding(3, 256)
)
model.transformer_module.position_embedder name mod Mask2FormerSinePositionEmbedding()
model.transformer_module.queries_embedder name mod Embedding(100, 256)
model.transformer_module.queries_features name mod Embedding(100, 256)
model.transformer_module.decoder name mod Mask2FormerMaskedAttentionDecoder(
(layers): ModuleList(
(0-3): 4 x Mask2FormerMaskedAttentionDecoderLayer(
(self_attn): Mask2FormerAttention(
(k_proj): Linear(in_features=256, out_features=256, bias=True)
(v_proj): Linear(in_features=256, out_features=256, bias=True)
(q_proj): Linear(in_features=256, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=256, bias=True)
)
(activation_fn): ReLU()
(self_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(cross_attn): MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
)
(cross_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(fc1): Linear(in_features=256, out_features=2048, bias=True)
(fc2): Linear(in_features=2048, out_features=256, bias=True)
(final_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
)
(layernorm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(mask_predictor): Mask2FormerMaskPredictor(
(mask_embedder): Mask2FormerMLPPredictionHead(
(0): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
)
(1): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
)
(2): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Identity()
)
)
)
)
model.transformer_module.decoder.layers name mod ModuleList(
(0-3): 4 x Mask2FormerMaskedAttentionDecoderLayer(
(self_attn): Mask2FormerAttention(
(k_proj): Linear(in_features=256, out_features=256, bias=True)
(v_proj): Linear(in_features=256, out_features=256, bias=True)
(q_proj): Linear(in_features=256, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=256, bias=True)
)
(activation_fn): ReLU()
(self_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(cross_attn): MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
)
(cross_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(fc1): Linear(in_features=256, out_features=2048, bias=True)
(fc2): Linear(in_features=2048, out_features=256, bias=True)
(final_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
)
model.transformer_module.decoder.layers.0 name mod Mask2FormerMaskedAttentionDecoderLayer(
(self_attn): Mask2FormerAttention(
(k_proj): Linear(in_features=256, out_features=256, bias=True)
(v_proj): Linear(in_features=256, out_features=256, bias=True)
(q_proj): Linear(in_features=256, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=256, bias=True)
)
(activation_fn): ReLU()
(self_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(cross_attn): MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
)
(cross_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(fc1): Linear(in_features=256, out_features=2048, bias=True)
(fc2): Linear(in_features=2048, out_features=256, bias=True)
(final_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
model.transformer_module.decoder.layers.0.self_attn name mod Mask2FormerAttention(
(k_proj): Linear(in_features=256, out_features=256, bias=True)
(v_proj): Linear(in_features=256, out_features=256, bias=True)
(q_proj): Linear(in_features=256, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=256, bias=True)
)
model.transformer_module.decoder.layers.0.self_attn.k_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.0.self_attn.v_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.0.self_attn.q_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.0.self_attn.out_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.0.activation_fn name mod ReLU()
model.transformer_module.decoder.layers.0.self_attn_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layers.0.cross_attn name mod MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
)
model.transformer_module.decoder.layers.0.cross_attn.out_proj name mod NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.0.cross_attn_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layers.0.fc1 name mod Linear(in_features=256, out_features=2048, bias=True)
model.transformer_module.decoder.layers.0.fc2 name mod Linear(in_features=2048, out_features=256, bias=True)
model.transformer_module.decoder.layers.0.final_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layers.1 name mod Mask2FormerMaskedAttentionDecoderLayer(
(self_attn): Mask2FormerAttention(
(k_proj): Linear(in_features=256, out_features=256, bias=True)
(v_proj): Linear(in_features=256, out_features=256, bias=True)
(q_proj): Linear(in_features=256, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=256, bias=True)
)
(activation_fn): ReLU()
(self_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(cross_attn): MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
)
(cross_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(fc1): Linear(in_features=256, out_features=2048, bias=True)
(fc2): Linear(in_features=2048, out_features=256, bias=True)
(final_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
model.transformer_module.decoder.layers.1.self_attn name mod Mask2FormerAttention(
(k_proj): Linear(in_features=256, out_features=256, bias=True)
(v_proj): Linear(in_features=256, out_features=256, bias=True)
(q_proj): Linear(in_features=256, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=256, bias=True)
)
model.transformer_module.decoder.layers.1.self_attn.k_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.1.self_attn.v_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.1.self_attn.q_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.1.self_attn.out_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.1.activation_fn name mod ReLU()
model.transformer_module.decoder.layers.1.self_attn_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layers.1.cross_attn name mod MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
)
model.transformer_module.decoder.layers.1.cross_attn.out_proj name mod NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.1.cross_attn_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layers.1.fc1 name mod Linear(in_features=256, out_features=2048, bias=True)
model.transformer_module.decoder.layers.1.fc2 name mod Linear(in_features=2048, out_features=256, bias=True)
model.transformer_module.decoder.layers.1.final_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layers.2 name mod Mask2FormerMaskedAttentionDecoderLayer(
(self_attn): Mask2FormerAttention(
(k_proj): Linear(in_features=256, out_features=256, bias=True)
(v_proj): Linear(in_features=256, out_features=256, bias=True)
(q_proj): Linear(in_features=256, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=256, bias=True)
)
(activation_fn): ReLU()
(self_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(cross_attn): MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
)
(cross_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(fc1): Linear(in_features=256, out_features=2048, bias=True)
(fc2): Linear(in_features=2048, out_features=256, bias=True)
(final_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
model.transformer_module.decoder.layers.2.self_attn name mod Mask2FormerAttention(
(k_proj): Linear(in_features=256, out_features=256, bias=True)
(v_proj): Linear(in_features=256, out_features=256, bias=True)
(q_proj): Linear(in_features=256, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=256, bias=True)
)
model.transformer_module.decoder.layers.2.self_attn.k_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.2.self_attn.v_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.2.self_attn.q_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.2.self_attn.out_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.2.activation_fn name mod ReLU()
model.transformer_module.decoder.layers.2.self_attn_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layers.2.cross_attn name mod MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
)
model.transformer_module.decoder.layers.2.cross_attn.out_proj name mod NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.2.cross_attn_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layers.2.fc1 name mod Linear(in_features=256, out_features=2048, bias=True)
model.transformer_module.decoder.layers.2.fc2 name mod Linear(in_features=2048, out_features=256, bias=True)
model.transformer_module.decoder.layers.2.final_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layers.3 name mod Mask2FormerMaskedAttentionDecoderLayer(
(self_attn): Mask2FormerAttention(
(k_proj): Linear(in_features=256, out_features=256, bias=True)
(v_proj): Linear(in_features=256, out_features=256, bias=True)
(q_proj): Linear(in_features=256, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=256, bias=True)
)
(activation_fn): ReLU()
(self_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(cross_attn): MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
)
(cross_attn_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(fc1): Linear(in_features=256, out_features=2048, bias=True)
(fc2): Linear(in_features=2048, out_features=256, bias=True)
(final_layer_norm): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
)
model.transformer_module.decoder.layers.3.self_attn name mod Mask2FormerAttention(
(k_proj): Linear(in_features=256, out_features=256, bias=True)
(v_proj): Linear(in_features=256, out_features=256, bias=True)
(q_proj): Linear(in_features=256, out_features=256, bias=True)
(out_proj): Linear(in_features=256, out_features=256, bias=True)
)
model.transformer_module.decoder.layers.3.self_attn.k_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.3.self_attn.v_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.3.self_attn.q_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.3.self_attn.out_proj name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.3.activation_fn name mod ReLU()
model.transformer_module.decoder.layers.3.self_attn_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layers.3.cross_attn name mod MultiheadAttention(
(out_proj): NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
)
model.transformer_module.decoder.layers.3.cross_attn.out_proj name mod NonDynamicallyQuantizableLinear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.layers.3.cross_attn_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layers.3.fc1 name mod Linear(in_features=256, out_features=2048, bias=True)
model.transformer_module.decoder.layers.3.fc2 name mod Linear(in_features=2048, out_features=256, bias=True)
model.transformer_module.decoder.layers.3.final_layer_norm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.layernorm name mod LayerNorm((256,), eps=1e-05, elementwise_affine=True)
model.transformer_module.decoder.mask_predictor name mod Mask2FormerMaskPredictor(
(mask_embedder): Mask2FormerMLPPredictionHead(
(0): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
)
(1): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
)
(2): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Identity()
)
)
)
model.transformer_module.decoder.mask_predictor.mask_embedder name mod Mask2FormerMLPPredictionHead(
(0): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
)
(1): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
)
(2): Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Identity()
)
)
model.transformer_module.decoder.mask_predictor.mask_embedder.0 name mod Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
)
model.transformer_module.decoder.mask_predictor.mask_embedder.0.0 name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.mask_predictor.mask_embedder.0.1 name mod ReLU()
model.transformer_module.decoder.mask_predictor.mask_embedder.1 name mod Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): ReLU()
)
model.transformer_module.decoder.mask_predictor.mask_embedder.1.0 name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.mask_predictor.mask_embedder.1.1 name mod ReLU()
model.transformer_module.decoder.mask_predictor.mask_embedder.2 name mod Mask2FormerPredictionBlock(
(0): Linear(in_features=256, out_features=256, bias=True)
(1): Identity()
)
model.transformer_module.decoder.mask_predictor.mask_embedder.2.0 name mod Linear(in_features=256, out_features=256, bias=True)
model.transformer_module.decoder.mask_predictor.mask_embedder.2.1 name mod Identity()
model.transformer_module.level_embed name mod Embedding(3, 256)
class_predictor name mod Linear(in_features=256, out_features=29, bias=True)
criterion name mod Mask2FormerLoss(
(matcher): Mask2FormerHungarianMatcher()
)
criterion.matcher name mod Mask2FormerHungarianMatcher()

Get the FQN (fully qualified name) of your embedding module and then just set module.qconfig = different_qconfig

e.g. model.transformer_module.position_embedder.qconfig = different_qconfig

Do that for each embedding layer.
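For instance, a sketch of that, assuming model_fp32 and input_fp32 from your earlier snippet are still in scope; walking modules() saves typing each FQN by hand, and setting a submodule's qconfig to None is the standard eager-mode way to leave it unquantized (for example the MultiheadAttention layers) if it gives you trouble:

import torch
import torch.nn as nn
import torch.ao.quantization as tq

model_fp32.qconfig = tq.get_default_qconfig('x86')

# Every nn.Embedding (queries_embedder, queries_features, level_embed) gets the
# weight-only qconfig; a module's own qconfig overrides the top-level one.
for module in model_fp32.modules():
    if isinstance(module, nn.Embedding):
        module.qconfig = tq.float_qparams_weight_only_qconfig
    elif isinstance(module, nn.MultiheadAttention):
        module.qconfig = None  # skip quantization for the attention blocks

model_prepare = tq.prepare(model_fp32, inplace=True)
model_prepare(input_fp32)  # calibration pass with representative data
final_model = tq.convert(model_prepare, inplace=True)
torch.save(final_model.state_dict(), 'model_quantised.pth')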