Strange behaviour with qnnpack accuracy

I’m working with timm’s mobilenetv2_100

import timm
model = timm.create_model('mobilenetv2_100')

and FX post-training static quantization.
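
For context, my flow follows the standard FX post-training static quantization recipe, roughly like this (simplified sketch: the input shape and calibration_loader are placeholders, and the exact prepare_fx signature depends on the PyTorch version):

import torch
from torch.ao.quantization import get_default_qconfig
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

model.eval()
qconfig = get_default_qconfig("fbgemm")  # or "qnnpack"
qconfig_dict = {"": qconfig}             # quantize the whole model

# Recent PyTorch versions also expect example inputs for tracing.
example_inputs = (torch.randn(1, 3, 224, 224),)  # placeholder shape
prepared = prepare_fx(model, qconfig_dict, example_inputs)

# Calibrate on a handful of batches (calibration_loader is a placeholder name).
with torch.no_grad():
    for images, _ in calibration_loader:
        prepared(images)

quantized = convert_fx(prepared)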

I’m seeing very strange behaviour in the quantized model’s accuracy. I’d be happy to provide more detail if there’s interest, but for now here are the clues:

  • Using get_default_qconfig("fbgemm") I get 100% accuracy (I’m only testing 10 samples, so 100% is a reasonable result).
  • Using get_default_qconfig("qnnpack") I get 0% accuracy BUT read on for the interesting clues.
  • If I only quantize some of the backbone blocks rather than the whole model, I can recover 100% accuracy with get_default_qconfig("qnnpack") (see the sketch after this list for how I select blocks):
    • Quantize only blocks [0] → 100%
    • [0, 2, 3, 4, 5] → 100%
    • [0, 1, 2, 3, 4, 5] → 0%
    • [1] → 100%
    • [0, 2, 3, 4, 5, 6] → 0%
    • [6] → 0%
  • All the above results were gathered by running the quantized model with the default quantized engine (fbgemm). When I set `torch.backends.quantized.engine = 'qnnpack'`, even the cases where I previously recovered 100% drop to 0%.
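
For reference, this is roughly how I restrict quantization to specific blocks and select the engine (simplified sketch; the module paths may need a prefix such as feature_extractor.blocks.0 depending on how the backbone is wrapped, and calibration is the same as above):

import torch
from torch.ao.quantization import get_default_qconfig
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

torch.backends.quantized.engine = "qnnpack"  # set before convert and before inference
qconfig = get_default_qconfig("qnnpack")

# Quantize only the selected backbone blocks; leave everything else in float.
blocks_to_quantize = [0, 2, 3, 4, 5]  # one of the combinations listed above
qconfig_dict = {
    "": None,  # global default: no quantization
    "module_name": [(f"blocks.{i}", qconfig) for i in blocks_to_quantize],
}

example_inputs = (torch.randn(1, 3, 224, 224),)  # placeholder shape
prepared = prepare_fx(model, qconfig_dict, example_inputs)
# ... calibrate as in the sketch above ...
quantized = convert_fx(prepared)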

Where could I go from here to understand what’s going on? I’m new to quantization so I’m not necessarily aware of the options.

Could it be that there’s some under/overflow issue that only shows up for certain combinations of quantized blocks?

For reference, the blocks of the model look like:

(blocks): Sequential(
    (0): Sequential(
      (0): DepthwiseSeparableConv(
        (conv_dw): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
        (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (se): Identity()
        (conv_pw): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): Identity()
      )
    )
    (1): Sequential(
      (0): InvertedResidual(
        (conv_pw): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False)
        (bn2): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): InvertedResidual(
        (conv_pw): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=144, bias=False)
        (bn2): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (2): Sequential(
      (0): InvertedResidual(
        (conv_pw): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(144, 144, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=144, bias=False)
        (bn2): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): InvertedResidual(
        (conv_pw): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
        (bn2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (2): InvertedResidual(
        (conv_pw): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
        (bn2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (3): Sequential(
      (0): InvertedResidual(
        (conv_pw): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=192, bias=False)
        (bn2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): InvertedResidual(
        (conv_pw): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
        (bn2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (2): InvertedResidual(
        (conv_pw): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
        (bn2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (3): InvertedResidual(
        (conv_pw): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
        (bn2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (4): Sequential(
      (0): InvertedResidual(
        (conv_pw): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
        (bn2): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(384, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): InvertedResidual(
        (conv_pw): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
        (bn2): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (2): InvertedResidual(
        (conv_pw): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
        (bn2): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (5): Sequential(
      (0): InvertedResidual(
        (conv_pw): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(576, 576, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=576, bias=False)
        (bn2): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(576, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): InvertedResidual(
        (conv_pw): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
        (bn2): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (2): InvertedResidual(
        (conv_pw): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
        (bn2): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (6): Sequential(
      (0): InvertedResidual(
        (conv_pw): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act1): ReLU6(inplace=True)
        (conv_dw): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
        (bn2): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act2): ReLU6(inplace=True)
        (se): Identity()
        (conv_pwl): Conv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )

Could you print the quantized model?

@jerryzh168 here it is (part 1 of 3)


(conv_pwl): QuantizedConv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), scale=0.2908034026622772, zero_point=129)
        )
      )
      (3): Module(
        (0): Module(
          (conv_pw): QuantizedConv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), scale=0.13633513450622559, zero_point=106)
          (act1): ReLU6(inplace=True)
          (conv_dw): QuantizedConv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), scale=0.11250900477170944, zero_point=133, padding=(1, 1), groups=192)
          (act2): ReLU6(inplace=True)
          (se): Identity()
          (conv_pwl): QuantizedConv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.3205368220806122, zero_point=131)
        )
        (1): Module(
          (conv_pw): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=0.10603202879428864, zero_point=118)
          (act1): ReLU6(inplace=True)
          (conv_dw): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=0.1964792162179947, zero_point=146, padding=(1, 1), groups=384)
          (act2): ReLU6(inplace=True)
          (se): Identity()
          (conv_pwl): QuantizedConv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.21481290459632874, zero_point=135)
        )
        (2): Module(
          (conv_pw): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=0.08830209076404572, zero_point=117)
          (act1): ReLU6(inplace=True)
          (conv_dw): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=0.2017802745103836, zero_point=95, padding=(1, 1), groups=384)
          (act2): ReLU6(inplace=True)
          (se): Identity()
          (conv_pwl): QuantizedConv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.16327263414859772, zero_point=126)
        )
        (3): Module(
          (conv_pw): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=0.08116503804922104, zero_point=117)
          (act1): ReLU6(inplace=True)
          (conv_dw): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=0.10571161657571793, zero_point=157, padding=(1, 1), groups=384)
          (act2): ReLU6(inplace=True)
          (se): Identity()
          (conv_pwl): QuantizedConv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), scale=0.1567448079586029, zero_point=134)
        )
      )
      (4): Module(
        (0): Module(
          (conv_pw): QuantizedConv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), scale=0.13676510751247406, zero_point=108)
          (act1): ReLU6(inplace=True)
          (conv_dw): QuantizedConv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), scale=0.26821452379226685, zero_point=88, padding=(1, 1), groups=384)
          (act2): ReLU6(inplace=True)
          (se): Identity()
          (conv_pwl): QuantizedConv2d(384, 96, kernel_size=(1, 1), stride=(1, 1), scale=0.2556881308555603, zero_point=119)
        )
        (1): Module(
          (conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=0.0698079839348793, zero_point=121)
          (act1): ReLU6(inplace=True)
          (conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=0.08290387690067291, zero_point=148, padding=(1, 1), groups=576)
          (act2): ReLU6(inplace=True)
          (se): Identity()
          (conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=0.16372020542621613, zero_point=123)
        )
        (2): Module(
          (conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=0.06444422900676727, zero_point=141)
          (act1): ReLU6(inplace=True)
          (conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), scale=0.07609664648771286, zero_point=142, padding=(1, 1), groups=576)
          (act2): ReLU6(inplace=True)
          (se): Identity()
          (conv_pwl): QuantizedConv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), scale=0.1533583104610443, zero_point=131)
        )
      )
      (5): Module(
        (0): Module(
          (conv_pw): QuantizedConv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), scale=0.0860033854842186, zero_point=141)
          (act1): ReLU6(inplace=True)
          (conv_dw): QuantizedConv2d(576, 576, kernel_size=(3, 3), stride=(2, 2), scale=0.07259328663349152, zero_point=121, padding=(1, 1), groups=576)
          (act2): ReLU6(inplace=True)
          (se): Identity()
          (conv_pwl): QuantizedConv2d(576, 160, kernel_size=(1, 1), stride=(1, 1), scale=0.22664271295070648, zero_point=119)
        )
        (1): Module(
          (conv_pw): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=0.0699293464422226, zero_point=116)
          (act1): ReLU6(inplace=True)
          (conv_dw): QuantizedConv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), scale=0.07911744713783264, zero_point=144, padding=(1, 1), groups=960)
          (act2): ReLU6(inplace=True)
          (se): Identity()
          (conv_pwl): QuantizedConv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), scale=0.12189313769340515, zero_point=130)
        )
        (2): Module(
          (conv_pw): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=0.06675049662590027, zero_point=129)
          (act1): ReLU6(inplace=True)
          (conv_dw): QuantizedConv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), scale=0.07856452465057373, zero_point=159, padding=(1, 1), groups=960)
          (act2): ReLU6(inplace=True)
          (se): Identity()
          (conv_pwl): QuantizedConv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), scale=0.1353532373905182, zero_point=132)
        )
      )
      (6): Module(
        (0): Module(
          (conv_pw): QuantizedConv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), scale=0.10627786070108414, zero_point=105)
          (act1): ReLU6(inplace=True)
          (conv_dw): QuantizedConv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), scale=0.15888939797878265, zero_point=161, padding=(1, 1), groups=960)
          (act2): ReLU6(inplace=True)
          (se): Identity()
          (conv_pwl): QuantizedConv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), scale=0.09197470545768738, zero_point=130)
        )
      )
    )
    (conv_head): QuantizedConv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), scale=0.13624273240566254, zero_point=160)
    (act2): ReLU6(inplace=True)
    (global_pool): Module(
      (pool): Identity()
    )
    (classifier): Identity()
  )
  (rnn): GRU(1280, 128, batch_first=True, bidirectional=True)
  (relu): ReLU()
  (dropout): Dropout(p=0.3, inplace=False)
  (fc): QuantizedLinear(in_features=256, out_features=11, scale=0.06948493421077728, zero_point=120, qscheme=torch.per_tensor_affine)
)

(part 2 of 3)

import torch
def forward(self, img):
    feature_extractor_conv_stem_input_scale_0 = self.feature_extractor_conv_stem_input_scale_0
    feature_extractor_conv_stem_input_zero_point_0 = self.feature_extractor_conv_stem_input_zero_point_0
    feature_extractor_conv_stem_input_dtype_0 = self.feature_extractor_conv_stem_input_dtype_0
    quantize_per_tensor_1 = torch.quantize_per_tensor(img, feature_extractor_conv_stem_input_scale_0, feature_extractor_conv_stem_input_zero_point_0, feature_extractor_conv_stem_input_dtype_0);  img = feature_extractor_conv_stem_input_scale_0 = feature_extractor_conv_stem_input_zero_point_0 = feature_extractor_conv_stem_input_dtype_0 = None
    feature_extractor_conv_stem = self.feature_extractor.conv_stem(quantize_per_tensor_1);  quantize_per_tensor_1 = None
    feature_extractor_act1 = self.feature_extractor.act1(feature_extractor_conv_stem);  feature_extractor_conv_stem = None
    feature_extractor_blocks_0_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "0"), "0").conv_dw(feature_extractor_act1);  feature_extractor_act1 = None
    feature_extractor_blocks_0_0_act1 = getattr(getattr(self.feature_extractor.blocks, "0"), "0").act1(feature_extractor_blocks_0_0_conv_dw);  feature_extractor_blocks_0_0_conv_dw = None
    dequantize_1 = feature_extractor_blocks_0_0_act1.dequantize();  feature_extractor_blocks_0_0_act1 = None
    feature_extractor_blocks_0_0_se = getattr(getattr(self.feature_extractor.blocks, "0"), "0").se(dequantize_1);  dequantize_1 = None
    feature_extractor_blocks_0_0_conv_pw_input_scale_0 = self.feature_extractor_blocks_0_0_conv_pw_input_scale_0
    feature_extractor_blocks_0_0_conv_pw_input_zero_point_0 = self.feature_extractor_blocks_0_0_conv_pw_input_zero_point_0
    feature_extractor_blocks_0_0_conv_pw_input_dtype_0 = self.feature_extractor_blocks_0_0_conv_pw_input_dtype_0
    quantize_per_tensor_2 = torch.quantize_per_tensor(feature_extractor_blocks_0_0_se, feature_extractor_blocks_0_0_conv_pw_input_scale_0, feature_extractor_blocks_0_0_conv_pw_input_zero_point_0, feature_extractor_blocks_0_0_conv_pw_input_dtype_0);  feature_extractor_blocks_0_0_se = feature_extractor_blocks_0_0_conv_pw_input_scale_0 = feature_extractor_blocks_0_0_conv_pw_input_zero_point_0 = feature_extractor_blocks_0_0_conv_pw_input_dtype_0 = None
    feature_extractor_blocks_0_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "0"), "0").conv_pw(quantize_per_tensor_2);  quantize_per_tensor_2 = None
    dequantize_2 = feature_extractor_blocks_0_0_conv_pw.dequantize();  feature_extractor_blocks_0_0_conv_pw = None
    feature_extractor_blocks_0_0_act2 = getattr(getattr(self.feature_extractor.blocks, "0"), "0").act2(dequantize_2);  dequantize_2 = None
    feature_extractor_blocks_1_0_conv_pw_input_scale_0 = self.feature_extractor_blocks_1_0_conv_pw_input_scale_0
    feature_extractor_blocks_1_0_conv_pw_input_zero_point_0 = self.feature_extractor_blocks_1_0_conv_pw_input_zero_point_0
    feature_extractor_blocks_1_0_conv_pw_input_dtype_0 = self.feature_extractor_blocks_1_0_conv_pw_input_dtype_0
    quantize_per_tensor_3 = torch.quantize_per_tensor(feature_extractor_blocks_0_0_act2, feature_extractor_blocks_1_0_conv_pw_input_scale_0, feature_extractor_blocks_1_0_conv_pw_input_zero_point_0, feature_extractor_blocks_1_0_conv_pw_input_dtype_0);  feature_extractor_blocks_0_0_act2 = feature_extractor_blocks_1_0_conv_pw_input_scale_0 = feature_extractor_blocks_1_0_conv_pw_input_zero_point_0 = feature_extractor_blocks_1_0_conv_pw_input_dtype_0 = None
    feature_extractor_blocks_1_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "1"), "0").conv_pw(quantize_per_tensor_3);  quantize_per_tensor_3 = None
    feature_extractor_blocks_1_0_act1 = getattr(getattr(self.feature_extractor.blocks, "1"), "0").act1(feature_extractor_blocks_1_0_conv_pw);  feature_extractor_blocks_1_0_conv_pw = None
    feature_extractor_blocks_1_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "1"), "0").conv_dw(feature_extractor_blocks_1_0_act1);  feature_extractor_blocks_1_0_act1 = None
    feature_extractor_blocks_1_0_act2 = getattr(getattr(self.feature_extractor.blocks, "1"), "0").act2(feature_extractor_blocks_1_0_conv_dw);  feature_extractor_blocks_1_0_conv_dw = None
    dequantize_3 = feature_extractor_blocks_1_0_act2.dequantize();  feature_extractor_blocks_1_0_act2 = None
    feature_extractor_blocks_1_0_se = getattr(getattr(self.feature_extractor.blocks, "1"), "0").se(dequantize_3);  dequantize_3 = None
    feature_extractor_blocks_1_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_1_0_conv_pwl_input_scale_0
    feature_extractor_blocks_1_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_1_0_conv_pwl_input_zero_point_0
    feature_extractor_blocks_1_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_1_0_conv_pwl_input_dtype_0
    quantize_per_tensor_4 = torch.quantize_per_tensor(feature_extractor_blocks_1_0_se, feature_extractor_blocks_1_0_conv_pwl_input_scale_0, feature_extractor_blocks_1_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_1_0_conv_pwl_input_dtype_0);  feature_extractor_blocks_1_0_se = feature_extractor_blocks_1_0_conv_pwl_input_scale_0 = feature_extractor_blocks_1_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_1_0_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_1_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "1"), "0").conv_pwl(quantize_per_tensor_4);  quantize_per_tensor_4 = None
    feature_extractor_blocks_1_1_conv_pw = getattr(getattr(self.feature_extractor.blocks, "1"), "1").conv_pw(feature_extractor_blocks_1_0_conv_pwl)
    feature_extractor_blocks_1_1_act1 = getattr(getattr(self.feature_extractor.blocks, "1"), "1").act1(feature_extractor_blocks_1_1_conv_pw);  feature_extractor_blocks_1_1_conv_pw = None
    feature_extractor_blocks_1_1_conv_dw = getattr(getattr(self.feature_extractor.blocks, "1"), "1").conv_dw(feature_extractor_blocks_1_1_act1);  feature_extractor_blocks_1_1_act1 = None
    feature_extractor_blocks_1_1_act2 = getattr(getattr(self.feature_extractor.blocks, "1"), "1").act2(feature_extractor_blocks_1_1_conv_dw);  feature_extractor_blocks_1_1_conv_dw = None
    dequantize_4 = feature_extractor_blocks_1_1_act2.dequantize();  feature_extractor_blocks_1_1_act2 = None
    feature_extractor_blocks_1_1_se = getattr(getattr(self.feature_extractor.blocks, "1"), "1").se(dequantize_4);  dequantize_4 = None
    feature_extractor_blocks_1_1_conv_pwl_input_scale_0 = self.feature_extractor_blocks_1_1_conv_pwl_input_scale_0
    feature_extractor_blocks_1_1_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_1_1_conv_pwl_input_zero_point_0
    feature_extractor_blocks_1_1_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_1_1_conv_pwl_input_dtype_0
    quantize_per_tensor_5 = torch.quantize_per_tensor(feature_extractor_blocks_1_1_se, feature_extractor_blocks_1_1_conv_pwl_input_scale_0, feature_extractor_blocks_1_1_conv_pwl_input_zero_point_0, feature_extractor_blocks_1_1_conv_pwl_input_dtype_0);  feature_extractor_blocks_1_1_se = feature_extractor_blocks_1_1_conv_pwl_input_scale_0 = feature_extractor_blocks_1_1_conv_pwl_input_zero_point_0 = feature_extractor_blocks_1_1_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_1_1_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "1"), "1").conv_pwl(quantize_per_tensor_5);  quantize_per_tensor_5 = None
    feature_extractor_blocks_1_1_scale_0 = self.feature_extractor_blocks_1_1_scale_0
    feature_extractor_blocks_1_1_zero_point_0 = self.feature_extractor_blocks_1_1_zero_point_0
    add_1 = torch.ops.quantized.add(feature_extractor_blocks_1_1_conv_pwl, feature_extractor_blocks_1_0_conv_pwl, feature_extractor_blocks_1_1_scale_0, feature_extractor_blocks_1_1_zero_point_0);  feature_extractor_blocks_1_1_conv_pwl = feature_extractor_blocks_1_0_conv_pwl = feature_extractor_blocks_1_1_scale_0 = feature_extractor_blocks_1_1_zero_point_0 = None
    feature_extractor_blocks_2_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "2"), "0").conv_pw(add_1);  add_1 = None
    feature_extractor_blocks_2_0_act1 = getattr(getattr(self.feature_extractor.blocks, "2"), "0").act1(feature_extractor_blocks_2_0_conv_pw);  feature_extractor_blocks_2_0_conv_pw = None
    feature_extractor_blocks_2_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "2"), "0").conv_dw(feature_extractor_blocks_2_0_act1);  feature_extractor_blocks_2_0_act1 = None
    feature_extractor_blocks_2_0_act2 = getattr(getattr(self.feature_extractor.blocks, "2"), "0").act2(feature_extractor_blocks_2_0_conv_dw);  feature_extractor_blocks_2_0_conv_dw = None
    dequantize_5 = feature_extractor_blocks_2_0_act2.dequantize();  feature_extractor_blocks_2_0_act2 = None
    feature_extractor_blocks_2_0_se = getattr(getattr(self.feature_extractor.blocks, "2"), "0").se(dequantize_5);  dequantize_5 = None
    feature_extractor_blocks_2_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_2_0_conv_pwl_input_scale_0
    feature_extractor_blocks_2_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_2_0_conv_pwl_input_zero_point_0
    feature_extractor_blocks_2_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_2_0_conv_pwl_input_dtype_0
    quantize_per_tensor_6 = torch.quantize_per_tensor(feature_extractor_blocks_2_0_se, feature_extractor_blocks_2_0_conv_pwl_input_scale_0, feature_extractor_blocks_2_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_2_0_conv_pwl_input_dtype_0);  feature_extractor_blocks_2_0_se = feature_extractor_blocks_2_0_conv_pwl_input_scale_0 = feature_extractor_blocks_2_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_2_0_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_2_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "2"), "0").conv_pwl(quantize_per_tensor_6);  quantize_per_tensor_6 = None
    feature_extractor_blocks_2_1_conv_pw = getattr(getattr(self.feature_extractor.blocks, "2"), "1").conv_pw(feature_extractor_blocks_2_0_conv_pwl)
    feature_extractor_blocks_2_1_act1 = getattr(getattr(self.feature_extractor.blocks, "2"), "1").act1(feature_extractor_blocks_2_1_conv_pw);  feature_extractor_blocks_2_1_conv_pw = None
    feature_extractor_blocks_2_1_conv_dw = getattr(getattr(self.feature_extractor.blocks, "2"), "1").conv_dw(feature_extractor_blocks_2_1_act1);  feature_extractor_blocks_2_1_act1 = None
    feature_extractor_blocks_2_1_act2 = getattr(getattr(self.feature_extractor.blocks, "2"), "1").act2(feature_extractor_blocks_2_1_conv_dw);  feature_extractor_blocks_2_1_conv_dw = None
    dequantize_6 = feature_extractor_blocks_2_1_act2.dequantize();  feature_extractor_blocks_2_1_act2 = None
    feature_extractor_blocks_2_1_se = getattr(getattr(self.feature_extractor.blocks, "2"), "1").se(dequantize_6);  dequantize_6 = None
    feature_extractor_blocks_2_1_conv_pwl_input_scale_0 = self.feature_extractor_blocks_2_1_conv_pwl_input_scale_0
    feature_extractor_blocks_2_1_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_2_1_conv_pwl_input_zero_point_0
    feature_extractor_blocks_2_1_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_2_1_conv_pwl_input_dtype_0
    quantize_per_tensor_7 = torch.quantize_per_tensor(feature_extractor_blocks_2_1_se, feature_extractor_blocks_2_1_conv_pwl_input_scale_0, feature_extractor_blocks_2_1_conv_pwl_input_zero_point_0, feature_extractor_blocks_2_1_conv_pwl_input_dtype_0);  feature_extractor_blocks_2_1_se = feature_extractor_blocks_2_1_conv_pwl_input_scale_0 = feature_extractor_blocks_2_1_conv_pwl_input_zero_point_0 = feature_extractor_blocks_2_1_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_2_1_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "2"), "1").conv_pwl(quantize_per_tensor_7);  quantize_per_tensor_7 = None
    feature_extractor_blocks_2_1_scale_0 = self.feature_extractor_blocks_2_1_scale_0
    feature_extractor_blocks_2_1_zero_point_0 = self.feature_extractor_blocks_2_1_zero_point_0
    add_2 = torch.ops.quantized.add(feature_extractor_blocks_2_1_conv_pwl, feature_extractor_blocks_2_0_conv_pwl, feature_extractor_blocks_2_1_scale_0, feature_extractor_blocks_2_1_zero_point_0);  feature_extractor_blocks_2_1_conv_pwl = feature_extractor_blocks_2_0_conv_pwl = feature_extractor_blocks_2_1_scale_0 = feature_extractor_blocks_2_1_zero_point_0 = None
    feature_extractor_blocks_2_2_conv_pw = getattr(getattr(self.feature_extractor.blocks, "2"), "2").conv_pw(add_2)
    feature_extractor_blocks_2_2_act1 = getattr(getattr(self.feature_extractor.blocks, "2"), "2").act1(feature_extractor_blocks_2_2_conv_pw);  feature_extractor_blocks_2_2_conv_pw = None
    feature_extractor_blocks_2_2_conv_dw = getattr(getattr(self.feature_extractor.blocks, "2"), "2").conv_dw(feature_extractor_blocks_2_2_act1);  feature_extractor_blocks_2_2_act1 = None
    feature_extractor_blocks_2_2_act2 = getattr(getattr(self.feature_extractor.blocks, "2"), "2").act2(feature_extractor_blocks_2_2_conv_dw);  feature_extractor_blocks_2_2_conv_dw = None
    dequantize_7 = feature_extractor_blocks_2_2_act2.dequantize();  feature_extractor_blocks_2_2_act2 = None
    feature_extractor_blocks_2_2_se = getattr(getattr(self.feature_extractor.blocks, "2"), "2").se(dequantize_7);  dequantize_7 = None
    feature_extractor_blocks_2_2_conv_pwl_input_scale_0 = self.feature_extractor_blocks_2_2_conv_pwl_input_scale_0
    feature_extractor_blocks_2_2_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_2_2_conv_pwl_input_zero_point_0
    feature_extractor_blocks_2_2_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_2_2_conv_pwl_input_dtype_0
    quantize_per_tensor_8 = torch.quantize_per_tensor(feature_extractor_blocks_2_2_se, feature_extractor_blocks_2_2_conv_pwl_input_scale_0, feature_extractor_blocks_2_2_conv_pwl_input_zero_point_0, feature_extractor_blocks_2_2_conv_pwl_input_dtype_0);  feature_extractor_blocks_2_2_se = feature_extractor_blocks_2_2_conv_pwl_input_scale_0 = feature_extractor_blocks_2_2_conv_pwl_input_zero_point_0 = feature_extractor_blocks_2_2_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_2_2_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "2"), "2").conv_pwl(quantize_per_tensor_8);  quantize_per_tensor_8 = None
    feature_extractor_blocks_2_2_scale_0 = self.feature_extractor_blocks_2_2_scale_0
    feature_extractor_blocks_2_2_zero_point_0 = self.feature_extractor_blocks_2_2_zero_point_0
    add_3 = torch.ops.quantized.add(feature_extractor_blocks_2_2_conv_pwl, add_2, feature_extractor_blocks_2_2_scale_0, feature_extractor_blocks_2_2_zero_point_0);  feature_extractor_blocks_2_2_conv_pwl = add_2 = feature_extractor_blocks_2_2_scale_0 = feature_extractor_blocks_2_2_zero_point_0 = None
    feature_extractor_blocks_3_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "3"), "0").conv_pw(add_3);  add_3 = None
    feature_extractor_blocks_3_0_act1 = getattr(getattr(self.feature_extractor.blocks, "3"), "0").act1(feature_extractor_blocks_3_0_conv_pw);  feature_extractor_blocks_3_0_conv_pw = None
    feature_extractor_blocks_3_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "3"), "0").conv_dw(feature_extractor_blocks_3_0_act1);  feature_extractor_blocks_3_0_act1 = None
    feature_extractor_blocks_3_0_act2 = getattr(getattr(self.feature_extractor.blocks, "3"), "0").act2(feature_extractor_blocks_3_0_conv_dw);  feature_extractor_blocks_3_0_conv_dw = None
    dequantize_8 = feature_extractor_blocks_3_0_act2.dequantize();  feature_extractor_blocks_3_0_act2 = None
    feature_extractor_blocks_3_0_se = getattr(getattr(self.feature_extractor.blocks, "3"), "0").se(dequantize_8);  dequantize_8 = None
    feature_extractor_blocks_3_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_3_0_conv_pwl_input_scale_0
    feature_extractor_blocks_3_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_3_0_conv_pwl_input_zero_point_0
    feature_extractor_blocks_3_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_3_0_conv_pwl_input_dtype_0
    quantize_per_tensor_9 = torch.quantize_per_tensor(feature_extractor_blocks_3_0_se, feature_extractor_blocks_3_0_conv_pwl_input_scale_0, feature_extractor_blocks_3_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_3_0_conv_pwl_input_dtype_0);  feature_extractor_blocks_3_0_se = feature_extractor_blocks_3_0_conv_pwl_input_scale_0 = feature_extractor_blocks_3_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_3_0_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_3_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "3"), "0").conv_pwl(quantize_per_tensor_9);  quantize_per_tensor_9 = None
    feature_extractor_blocks_3_1_conv_pw = getattr(getattr(self.feature_extractor.blocks, "3"), "1").conv_pw(feature_extractor_blocks_3_0_conv_pwl)
    feature_extractor_blocks_3_1_act1 = getattr(getattr(self.feature_extractor.blocks, "3"), "1").act1(feature_extractor_blocks_3_1_conv_pw);  feature_extractor_blocks_3_1_conv_pw = None
    feature_extractor_blocks_3_1_conv_dw = getattr(getattr(self.feature_extractor.blocks, "3"), "1").conv_dw(feature_extractor_blocks_3_1_act1);  feature_extractor_blocks_3_1_act1 = None
    feature_extractor_blocks_3_1_act2 = getattr(getattr(self.feature_extractor.blocks, "3"), "1").act2(feature_extractor_blocks_3_1_conv_dw);  feature_extractor_blocks_3_1_conv_dw = None
    dequantize_9 = feature_extractor_blocks_3_1_act2.dequantize();  feature_extractor_blocks_3_1_act2 = None
    feature_extractor_blocks_3_1_se = getattr(getattr(self.feature_extractor.blocks, "3"), "1").se(dequantize_9);  dequantize_9 = None
    feature_extractor_blocks_3_1_conv_pwl_input_scale_0 = self.feature_extractor_blocks_3_1_conv_pwl_input_scale_0
    feature_extractor_blocks_3_1_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_3_1_conv_pwl_input_zero_point_0
    feature_extractor_blocks_3_1_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_3_1_conv_pwl_input_dtype_0
    quantize_per_tensor_10 = torch.quantize_per_tensor(feature_extractor_blocks_3_1_se, feature_extractor_blocks_3_1_conv_pwl_input_scale_0, feature_extractor_blocks_3_1_conv_pwl_input_zero_point_0, 

(part 3 of 3)

feature_extractor_blocks_3_1_conv_pwl_input_dtype_0);  feature_extractor_blocks_3_1_se = feature_extractor_blocks_3_1_conv_pwl_input_scale_0 = feature_extractor_blocks_3_1_conv_pwl_input_zero_point_0 = feature_extractor_blocks_3_1_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_3_1_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "3"), "1").conv_pwl(quantize_per_tensor_10);  quantize_per_tensor_10 = None
    feature_extractor_blocks_3_1_scale_0 = self.feature_extractor_blocks_3_1_scale_0
    feature_extractor_blocks_3_1_zero_point_0 = self.feature_extractor_blocks_3_1_zero_point_0
    add_4 = torch.ops.quantized.add(feature_extractor_blocks_3_1_conv_pwl, feature_extractor_blocks_3_0_conv_pwl, feature_extractor_blocks_3_1_scale_0, feature_extractor_blocks_3_1_zero_point_0);  feature_extractor_blocks_3_1_conv_pwl = feature_extractor_blocks_3_0_conv_pwl = feature_extractor_blocks_3_1_scale_0 = feature_extractor_blocks_3_1_zero_point_0 = None
    feature_extractor_blocks_3_2_conv_pw = getattr(getattr(self.feature_extractor.blocks, "3"), "2").conv_pw(add_4)
    feature_extractor_blocks_3_2_act1 = getattr(getattr(self.feature_extractor.blocks, "3"), "2").act1(feature_extractor_blocks_3_2_conv_pw);  feature_extractor_blocks_3_2_conv_pw = None
    feature_extractor_blocks_3_2_conv_dw = getattr(getattr(self.feature_extractor.blocks, "3"), "2").conv_dw(feature_extractor_blocks_3_2_act1);  feature_extractor_blocks_3_2_act1 = None
    feature_extractor_blocks_3_2_act2 = getattr(getattr(self.feature_extractor.blocks, "3"), "2").act2(feature_extractor_blocks_3_2_conv_dw);  feature_extractor_blocks_3_2_conv_dw = None
    dequantize_10 = feature_extractor_blocks_3_2_act2.dequantize();  feature_extractor_blocks_3_2_act2 = None
    feature_extractor_blocks_3_2_se = getattr(getattr(self.feature_extractor.blocks, "3"), "2").se(dequantize_10);  dequantize_10 = None
    feature_extractor_blocks_3_2_conv_pwl_input_scale_0 = self.feature_extractor_blocks_3_2_conv_pwl_input_scale_0
    feature_extractor_blocks_3_2_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_3_2_conv_pwl_input_zero_point_0
    feature_extractor_blocks_3_2_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_3_2_conv_pwl_input_dtype_0
    quantize_per_tensor_11 = torch.quantize_per_tensor(feature_extractor_blocks_3_2_se, feature_extractor_blocks_3_2_conv_pwl_input_scale_0, feature_extractor_blocks_3_2_conv_pwl_input_zero_point_0, feature_extractor_blocks_3_2_conv_pwl_input_dtype_0);  feature_extractor_blocks_3_2_se = feature_extractor_blocks_3_2_conv_pwl_input_scale_0 = feature_extractor_blocks_3_2_conv_pwl_input_zero_point_0 = feature_extractor_blocks_3_2_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_3_2_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "3"), "2").conv_pwl(quantize_per_tensor_11);  quantize_per_tensor_11 = None
    feature_extractor_blocks_3_2_scale_0 = self.feature_extractor_blocks_3_2_scale_0
    feature_extractor_blocks_3_2_zero_point_0 = self.feature_extractor_blocks_3_2_zero_point_0
    add_5 = torch.ops.quantized.add(feature_extractor_blocks_3_2_conv_pwl, add_4, feature_extractor_blocks_3_2_scale_0, feature_extractor_blocks_3_2_zero_point_0);  feature_extractor_blocks_3_2_conv_pwl = add_4 = feature_extractor_blocks_3_2_scale_0 = feature_extractor_blocks_3_2_zero_point_0 = None
    feature_extractor_blocks_3_3_conv_pw = getattr(getattr(self.feature_extractor.blocks, "3"), "3").conv_pw(add_5)
    feature_extractor_blocks_3_3_act1 = getattr(getattr(self.feature_extractor.blocks, "3"), "3").act1(feature_extractor_blocks_3_3_conv_pw);  feature_extractor_blocks_3_3_conv_pw = None
    feature_extractor_blocks_3_3_conv_dw = getattr(getattr(self.feature_extractor.blocks, "3"), "3").conv_dw(feature_extractor_blocks_3_3_act1);  feature_extractor_blocks_3_3_act1 = None
    feature_extractor_blocks_3_3_act2 = getattr(getattr(self.feature_extractor.blocks, "3"), "3").act2(feature_extractor_blocks_3_3_conv_dw);  feature_extractor_blocks_3_3_conv_dw = None
    dequantize_11 = feature_extractor_blocks_3_3_act2.dequantize();  feature_extractor_blocks_3_3_act2 = None
    feature_extractor_blocks_3_3_se = getattr(getattr(self.feature_extractor.blocks, "3"), "3").se(dequantize_11);  dequantize_11 = None
    feature_extractor_blocks_3_3_conv_pwl_input_scale_0 = self.feature_extractor_blocks_3_3_conv_pwl_input_scale_0
    feature_extractor_blocks_3_3_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_3_3_conv_pwl_input_zero_point_0
    feature_extractor_blocks_3_3_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_3_3_conv_pwl_input_dtype_0
    quantize_per_tensor_12 = torch.quantize_per_tensor(feature_extractor_blocks_3_3_se, feature_extractor_blocks_3_3_conv_pwl_input_scale_0, feature_extractor_blocks_3_3_conv_pwl_input_zero_point_0, feature_extractor_blocks_3_3_conv_pwl_input_dtype_0);  feature_extractor_blocks_3_3_se = feature_extractor_blocks_3_3_conv_pwl_input_scale_0 = feature_extractor_blocks_3_3_conv_pwl_input_zero_point_0 = feature_extractor_blocks_3_3_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_3_3_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "3"), "3").conv_pwl(quantize_per_tensor_12);  quantize_per_tensor_12 = None
    feature_extractor_blocks_3_3_scale_0 = self.feature_extractor_blocks_3_3_scale_0
    feature_extractor_blocks_3_3_zero_point_0 = self.feature_extractor_blocks_3_3_zero_point_0
    add_6 = torch.ops.quantized.add(feature_extractor_blocks_3_3_conv_pwl, add_5, feature_extractor_blocks_3_3_scale_0, feature_extractor_blocks_3_3_zero_point_0);  feature_extractor_blocks_3_3_conv_pwl = add_5 = feature_extractor_blocks_3_3_scale_0 = feature_extractor_blocks_3_3_zero_point_0 = None
    feature_extractor_blocks_4_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "4"), "0").conv_pw(add_6);  add_6 = None
    feature_extractor_blocks_4_0_act1 = getattr(getattr(self.feature_extractor.blocks, "4"), "0").act1(feature_extractor_blocks_4_0_conv_pw);  feature_extractor_blocks_4_0_conv_pw = None
    feature_extractor_blocks_4_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "4"), "0").conv_dw(feature_extractor_blocks_4_0_act1);  feature_extractor_blocks_4_0_act1 = None
    feature_extractor_blocks_4_0_act2 = getattr(getattr(self.feature_extractor.blocks, "4"), "0").act2(feature_extractor_blocks_4_0_conv_dw);  feature_extractor_blocks_4_0_conv_dw = None
    dequantize_12 = feature_extractor_blocks_4_0_act2.dequantize();  feature_extractor_blocks_4_0_act2 = None
    feature_extractor_blocks_4_0_se = getattr(getattr(self.feature_extractor.blocks, "4"), "0").se(dequantize_12);  dequantize_12 = None
    feature_extractor_blocks_4_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_4_0_conv_pwl_input_scale_0
    feature_extractor_blocks_4_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_4_0_conv_pwl_input_zero_point_0
    feature_extractor_blocks_4_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_4_0_conv_pwl_input_dtype_0
    quantize_per_tensor_13 = torch.quantize_per_tensor(feature_extractor_blocks_4_0_se, feature_extractor_blocks_4_0_conv_pwl_input_scale_0, feature_extractor_blocks_4_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_4_0_conv_pwl_input_dtype_0);  feature_extractor_blocks_4_0_se = feature_extractor_blocks_4_0_conv_pwl_input_scale_0 = feature_extractor_blocks_4_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_4_0_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_4_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "4"), "0").conv_pwl(quantize_per_tensor_13);  quantize_per_tensor_13 = None
    feature_extractor_blocks_4_1_conv_pw = getattr(getattr(self.feature_extractor.blocks, "4"), "1").conv_pw(feature_extractor_blocks_4_0_conv_pwl)
    feature_extractor_blocks_4_1_act1 = getattr(getattr(self.feature_extractor.blocks, "4"), "1").act1(feature_extractor_blocks_4_1_conv_pw);  feature_extractor_blocks_4_1_conv_pw = None
    feature_extractor_blocks_4_1_conv_dw = getattr(getattr(self.feature_extractor.blocks, "4"), "1").conv_dw(feature_extractor_blocks_4_1_act1);  feature_extractor_blocks_4_1_act1 = None
    feature_extractor_blocks_4_1_act2 = getattr(getattr(self.feature_extractor.blocks, "4"), "1").act2(feature_extractor_blocks_4_1_conv_dw);  feature_extractor_blocks_4_1_conv_dw = None
    dequantize_13 = feature_extractor_blocks_4_1_act2.dequantize();  feature_extractor_blocks_4_1_act2 = None
    feature_extractor_blocks_4_1_se = getattr(getattr(self.feature_extractor.blocks, "4"), "1").se(dequantize_13);  dequantize_13 = None
    feature_extractor_blocks_4_1_conv_pwl_input_scale_0 = self.feature_extractor_blocks_4_1_conv_pwl_input_scale_0
    feature_extractor_blocks_4_1_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_4_1_conv_pwl_input_zero_point_0
    feature_extractor_blocks_4_1_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_4_1_conv_pwl_input_dtype_0
    quantize_per_tensor_14 = torch.quantize_per_tensor(feature_extractor_blocks_4_1_se, feature_extractor_blocks_4_1_conv_pwl_input_scale_0, feature_extractor_blocks_4_1_conv_pwl_input_zero_point_0, feature_extractor_blocks_4_1_conv_pwl_input_dtype_0);  feature_extractor_blocks_4_1_se = feature_extractor_blocks_4_1_conv_pwl_input_scale_0 = feature_extractor_blocks_4_1_conv_pwl_input_zero_point_0 = feature_extractor_blocks_4_1_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_4_1_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "4"), "1").conv_pwl(quantize_per_tensor_14);  quantize_per_tensor_14 = None
    feature_extractor_blocks_4_1_scale_0 = self.feature_extractor_blocks_4_1_scale_0
    feature_extractor_blocks_4_1_zero_point_0 = self.feature_extractor_blocks_4_1_zero_point_0
    add_7 = torch.ops.quantized.add(feature_extractor_blocks_4_1_conv_pwl, feature_extractor_blocks_4_0_conv_pwl, feature_extractor_blocks_4_1_scale_0, feature_extractor_blocks_4_1_zero_point_0);  feature_extractor_blocks_4_1_conv_pwl = feature_extractor_blocks_4_0_conv_pwl = feature_extractor_blocks_4_1_scale_0 = feature_extractor_blocks_4_1_zero_point_0 = None
    feature_extractor_blocks_4_2_conv_pw = getattr(getattr(self.feature_extractor.blocks, "4"), "2").conv_pw(add_7)
    feature_extractor_blocks_4_2_act1 = getattr(getattr(self.feature_extractor.blocks, "4"), "2").act1(feature_extractor_blocks_4_2_conv_pw);  feature_extractor_blocks_4_2_conv_pw = None
    feature_extractor_blocks_4_2_conv_dw = getattr(getattr(self.feature_extractor.blocks, "4"), "2").conv_dw(feature_extractor_blocks_4_2_act1);  feature_extractor_blocks_4_2_act1 = None
    feature_extractor_blocks_4_2_act2 = getattr(getattr(self.feature_extractor.blocks, "4"), "2").act2(feature_extractor_blocks_4_2_conv_dw);  feature_extractor_blocks_4_2_conv_dw = None
    dequantize_14 = feature_extractor_blocks_4_2_act2.dequantize();  feature_extractor_blocks_4_2_act2 = None
    feature_extractor_blocks_4_2_se = getattr(getattr(self.feature_extractor.blocks, "4"), "2").se(dequantize_14);  dequantize_14 = None
    feature_extractor_blocks_4_2_conv_pwl_input_scale_0 = self.feature_extractor_blocks_4_2_conv_pwl_input_scale_0
    feature_extractor_blocks_4_2_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_4_2_conv_pwl_input_zero_point_0
    feature_extractor_blocks_4_2_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_4_2_conv_pwl_input_dtype_0
    quantize_per_tensor_15 = torch.quantize_per_tensor(feature_extractor_blocks_4_2_se, feature_extractor_blocks_4_2_conv_pwl_input_scale_0, feature_extractor_blocks_4_2_conv_pwl_input_zero_point_0, feature_extractor_blocks_4_2_conv_pwl_input_dtype_0);  feature_extractor_blocks_4_2_se = feature_extractor_blocks_4_2_conv_pwl_input_scale_0 = feature_extractor_blocks_4_2_conv_pwl_input_zero_point_0 = feature_extractor_blocks_4_2_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_4_2_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "4"), "2").conv_pwl(quantize_per_tensor_15);  quantize_per_tensor_15 = None
    feature_extractor_blocks_4_2_scale_0 = self.feature_extractor_blocks_4_2_scale_0
    feature_extractor_blocks_4_2_zero_point_0 = self.feature_extractor_blocks_4_2_zero_point_0
    add_8 = torch.ops.quantized.add(feature_extractor_blocks_4_2_conv_pwl, add_7, feature_extractor_blocks_4_2_scale_0, feature_extractor_blocks_4_2_zero_point_0);  feature_extractor_blocks_4_2_conv_pwl = add_7 = feature_extractor_blocks_4_2_scale_0 = feature_extractor_blocks_4_2_zero_point_0 = None
    feature_extractor_blocks_5_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "5"), "0").conv_pw(add_8);  add_8 = None
    feature_extractor_blocks_5_0_act1 = getattr(getattr(self.feature_extractor.blocks, "5"), "0").act1(feature_extractor_blocks_5_0_conv_pw);  feature_extractor_blocks_5_0_conv_pw = None
    feature_extractor_blocks_5_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "5"), "0").conv_dw(feature_extractor_blocks_5_0_act1);  feature_extractor_blocks_5_0_act1 = None
    feature_extractor_blocks_5_0_act2 = getattr(getattr(self.feature_extractor.blocks, "5"), "0").act2(feature_extractor_blocks_5_0_conv_dw);  feature_extractor_blocks_5_0_conv_dw = None
    dequantize_15 = feature_extractor_blocks_5_0_act2.dequantize();  feature_extractor_blocks_5_0_act2 = None
    feature_extractor_blocks_5_0_se = getattr(getattr(self.feature_extractor.blocks, "5"), "0").se(dequantize_15);  dequantize_15 = None
    feature_extractor_blocks_5_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_5_0_conv_pwl_input_scale_0
    feature_extractor_blocks_5_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_5_0_conv_pwl_input_zero_point_0
    feature_extractor_blocks_5_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_5_0_conv_pwl_input_dtype_0
    quantize_per_tensor_16 = torch.quantize_per_tensor(feature_extractor_blocks_5_0_se, feature_extractor_blocks_5_0_conv_pwl_input_scale_0, feature_extractor_blocks_5_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_5_0_conv_pwl_input_dtype_0);  feature_extractor_blocks_5_0_se = feature_extractor_blocks_5_0_conv_pwl_input_scale_0 = feature_extractor_blocks_5_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_5_0_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_5_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "5"), "0").conv_pwl(quantize_per_tensor_16);  quantize_per_tensor_16 = None
    feature_extractor_blocks_5_1_conv_pw = getattr(getattr(self.feature_extractor.blocks, "5"), "1").conv_pw(feature_extractor_blocks_5_0_conv_pwl)
    feature_extractor_blocks_5_1_act1 = getattr(getattr(self.feature_extractor.blocks, "5"), "1").act1(feature_extractor_blocks_5_1_conv_pw);  feature_extractor_blocks_5_1_conv_pw = None
    feature_extractor_blocks_5_1_conv_dw = getattr(getattr(self.feature_extractor.blocks, "5"), "1").conv_dw(feature_extractor_blocks_5_1_act1);  feature_extractor_blocks_5_1_act1 = None
    feature_extractor_blocks_5_1_act2 = getattr(getattr(self.feature_extractor.blocks, "5"), "1").act2(feature_extractor_blocks_5_1_conv_dw);  feature_extractor_blocks_5_1_conv_dw = None
    dequantize_16 = feature_extractor_blocks_5_1_act2.dequantize();  feature_extractor_blocks_5_1_act2 = None
    feature_extractor_blocks_5_1_se = getattr(getattr(self.feature_extractor.blocks, "5"), "1").se(dequantize_16);  dequantize_16 = None
    feature_extractor_blocks_5_1_conv_pwl_input_scale_0 = self.feature_extractor_blocks_5_1_conv_pwl_input_scale_0
    feature_extractor_blocks_5_1_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_5_1_conv_pwl_input_zero_point_0
    feature_extractor_blocks_5_1_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_5_1_conv_pwl_input_dtype_0
    quantize_per_tensor_17 = torch.quantize_per_tensor(feature_extractor_blocks_5_1_se, feature_extractor_blocks_5_1_conv_pwl_input_scale_0, feature_extractor_blocks_5_1_conv_pwl_input_zero_point_0, feature_extractor_blocks_5_1_conv_pwl_input_dtype_0);  feature_extractor_blocks_5_1_se = feature_extractor_blocks_5_1_conv_pwl_input_scale_0 = feature_extractor_blocks_5_1_conv_pwl_input_zero_point_0 = feature_extractor_blocks_5_1_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_5_1_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "5"), "1").conv_pwl(quantize_per_tensor_17);  quantize_per_tensor_17 = None
    feature_extractor_blocks_5_1_scale_0 = self.feature_extractor_blocks_5_1_scale_0
    feature_extractor_blocks_5_1_zero_point_0 = self.feature_extractor_blocks_5_1_zero_point_0
    add_9 = torch.ops.quantized.add(feature_extractor_blocks_5_1_conv_pwl, feature_extractor_blocks_5_0_conv_pwl, feature_extractor_blocks_5_1_scale_0, feature_extractor_blocks_5_1_zero_point_0);  feature_extractor_blocks_5_1_conv_pwl = feature_extractor_blocks_5_0_conv_pwl = feature_extractor_blocks_5_1_scale_0 = feature_extractor_blocks_5_1_zero_point_0 = None
    feature_extractor_blocks_5_2_conv_pw = getattr(getattr(self.feature_extractor.blocks, "5"), "2").conv_pw(add_9)
    feature_extractor_blocks_5_2_act1 = getattr(getattr(self.feature_extractor.blocks, "5"), "2").act1(feature_extractor_blocks_5_2_conv_pw);  feature_extractor_blocks_5_2_conv_pw = None
    feature_extractor_blocks_5_2_conv_dw = getattr(getattr(self.feature_extractor.blocks, "5"), "2").conv_dw(feature_extractor_blocks_5_2_act1);  feature_extractor_blocks_5_2_act1 = None
    feature_extractor_blocks_5_2_act2 = getattr(getattr(self.feature_extractor.blocks, "5"), "2").act2(feature_extractor_blocks_5_2_conv_dw);  feature_extractor_blocks_5_2_conv_dw = None
    dequantize_17 = feature_extractor_blocks_5_2_act2.dequantize();  feature_extractor_blocks_5_2_act2 = None
    feature_extractor_blocks_5_2_se = getattr(getattr(self.feature_extractor.blocks, "5"), "2").se(dequantize_17);  dequantize_17 = None
    feature_extractor_blocks_5_2_conv_pwl_input_scale_0 = self.feature_extractor_blocks_5_2_conv_pwl_input_scale_0
    feature_extractor_blocks_5_2_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_5_2_conv_pwl_input_zero_point_0
    feature_extractor_blocks_5_2_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_5_2_conv_pwl_input_dtype_0
    quantize_per_tensor_18 = torch.quantize_per_tensor(feature_extractor_blocks_5_2_se, feature_extractor_blocks_5_2_conv_pwl_input_scale_0, feature_extractor_blocks_5_2_conv_pwl_input_zero_point_0, feature_extractor_blocks_5_2_conv_pwl_input_dtype_0);  feature_extractor_blocks_5_2_se = feature_extractor_blocks_5_2_conv_pwl_input_scale_0 = feature_extractor_blocks_5_2_conv_pwl_input_zero_point_0 = feature_extractor_blocks_5_2_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_5_2_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "5"), "2").conv_pwl(quantize_per_tensor_18);  quantize_per_tensor_18 = None
    feature_extractor_blocks_5_2_scale_0 = self.feature_extractor_blocks_5_2_scale_0
    feature_extractor_blocks_5_2_zero_point_0 = self.feature_extractor_blocks_5_2_zero_point_0
    add_10 = torch.ops.quantized.add(feature_extractor_blocks_5_2_conv_pwl, add_9, feature_extractor_blocks_5_2_scale_0, feature_extractor_blocks_5_2_zero_point_0);  feature_extractor_blocks_5_2_conv_pwl = add_9 = feature_extractor_blocks_5_2_scale_0 = feature_extractor_blocks_5_2_zero_point_0 = None
    feature_extractor_blocks_6_0_conv_pw = getattr(getattr(self.feature_extractor.blocks, "6"), "0").conv_pw(add_10);  add_10 = None
    feature_extractor_blocks_6_0_act1 = getattr(getattr(self.feature_extractor.blocks, "6"), "0").act1(feature_extractor_blocks_6_0_conv_pw);  feature_extractor_blocks_6_0_conv_pw = None
    feature_extractor_blocks_6_0_conv_dw = getattr(getattr(self.feature_extractor.blocks, "6"), "0").conv_dw(feature_extractor_blocks_6_0_act1);  feature_extractor_blocks_6_0_act1 = None
    feature_extractor_blocks_6_0_act2 = getattr(getattr(self.feature_extractor.blocks, "6"), "0").act2(feature_extractor_blocks_6_0_conv_dw);  feature_extractor_blocks_6_0_conv_dw = None
    dequantize_18 = feature_extractor_blocks_6_0_act2.dequantize();  feature_extractor_blocks_6_0_act2 = None
    feature_extractor_blocks_6_0_se = getattr(getattr(self.feature_extractor.blocks, "6"), "0").se(dequantize_18);  dequantize_18 = None
    feature_extractor_blocks_6_0_conv_pwl_input_scale_0 = self.feature_extractor_blocks_6_0_conv_pwl_input_scale_0
    feature_extractor_blocks_6_0_conv_pwl_input_zero_point_0 = self.feature_extractor_blocks_6_0_conv_pwl_input_zero_point_0
    feature_extractor_blocks_6_0_conv_pwl_input_dtype_0 = self.feature_extractor_blocks_6_0_conv_pwl_input_dtype_0
    quantize_per_tensor_19 = torch.quantize_per_tensor(feature_extractor_blocks_6_0_se, feature_extractor_blocks_6_0_conv_pwl_input_scale_0, feature_extractor_blocks_6_0_conv_pwl_input_zero_point_0, feature_extractor_blocks_6_0_conv_pwl_input_dtype_0);  feature_extractor_blocks_6_0_se = feature_extractor_blocks_6_0_conv_pwl_input_scale_0 = feature_extractor_blocks_6_0_conv_pwl_input_zero_point_0 = feature_extractor_blocks_6_0_conv_pwl_input_dtype_0 = None
    feature_extractor_blocks_6_0_conv_pwl = getattr(getattr(self.feature_extractor.blocks, "6"), "0").conv_pwl(quantize_per_tensor_19);  quantize_per_tensor_19 = None
    feature_extractor_conv_head = self.feature_extractor.conv_head(feature_extractor_blocks_6_0_conv_pwl);  feature_extractor_blocks_6_0_conv_pwl = None
    feature_extractor_act2 = self.feature_extractor.act2(feature_extractor_conv_head);  feature_extractor_conv_head = None
    dequantize_19 = feature_extractor_act2.dequantize();  feature_extractor_act2 = None
    feature_extractor_global_pool_pool = self.feature_extractor.global_pool.pool(dequantize_19);  dequantize_19 = None
    feature_extractor_classifier = self.feature_extractor.classifier(feature_extractor_global_pool_pool);  feature_extractor_global_pool_pool = None
    getattr_1 = feature_extractor_classifier.shape
    getitem = getattr_1[-2];  getattr_1 = None
    max_pool2d_1 = torch.nn.functional.max_pool2d(feature_extractor_classifier, (getitem, 1), stride = 1, padding = 0, dilation = 1, ceil_mode = False, return_indices = False);  feature_extractor_classifier = getitem = None
    squeeze_1 = max_pool2d_1.squeeze(2);  max_pool2d_1 = None
    permute = squeeze_1.permute(0, 2, 1);  squeeze_1 = None
    rnn = self.rnn(permute);  permute = None
    getitem_1 = rnn[0]
    getitem_2 = rnn[1];  rnn = None
    relu_1 = self.relu(getitem_1);  getitem_1 = None
    dropout_1 = self.dropout(relu_1);  relu_1 = None
    fc_input_scale_0 = self.fc_input_scale_0
    fc_input_zero_point_0 = self.fc_input_zero_point_0
    fc_input_dtype_0 = self.fc_input_dtype_0
    quantize_per_tensor_20 = torch.quantize_per_tensor(dropout_1, fc_input_scale_0, fc_input_zero_point_0, fc_input_dtype_0);  dropout_1 = fc_input_scale_0 = fc_input_zero_point_0 = fc_input_dtype_0 = None
    fc = self.fc(quantize_per_tensor_20);  quantize_per_tensor_20 = None
    dequantize_20 = fc.dequantize();  fc = None
    return dequantize_20

The fbgemm qconfig does have a reduce_range setting (pytorch/qconfig.py at master · pytorch/pytorch · GitHub), so if you use the qnnpack qconfig (which keeps the full 8-bit activation range) but run on the fbgemm backend (which is the default), I think you will get overflows. But I’m not sure why setting the backend to qnnpack would also give wrong results. Did you set the qengine before convert?
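
You can see the difference by printing the two default qconfigs (sketch; the exact observer classes can vary across PyTorch versions):

from torch.ao.quantization import get_default_qconfig

print(get_default_qconfig("fbgemm"))   # activation observer typically has reduce_range=True (7-bit activations)
print(get_default_qconfig("qnnpack"))  # activation observer typically has reduce_range=False (full 8-bit range)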

Yes, I did set the qengine before convert. Perhaps I can put together a full working example with code and data? If so, I’ll make one on GitHub and share it with you.
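
In the meantime, based on your reduce_range hint, one thing I plan to try is a qnnpack-style qconfig with reduce_range forced on for the activations, just to rule out overflow when the model runs on the default engine (untested sketch):

import torch
from torch.ao.quantization import QConfig, HistogramObserver, default_weight_observer

# Hypothetical test qconfig: qnnpack-like weights, but activations observed with reduce_range=True.
test_qconfig = QConfig(
    activation=HistogramObserver.with_args(reduce_range=True),
    weight=default_weight_observer,  # per-tensor symmetric int8, as in the qnnpack default qconfig
)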