Quantizing model: I'm hitting createStatus == pytorch_qnnp_status_success INTERNAL ASSERT FAILED

I’m quantizing and converting to tflite this model: GitHub - Picsart-AI-Research/MI-GAN: [ICCV 2023] MI-GAN: A Simple Baseline for Image Inpainting on Mobile Devices
To do that I’m using this tool: GitHub - alibaba/TinyNeuralNetwork: TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.

I’m in the last step of conversion and I’m hitting:
RuntimeError: createStatus == pytorch_qnnp_status_success INTERNAL ASSERT FAILED at "../aten/src/ATen/native/quantized/cpu/BinaryOps.cpp":203, please report a bug to PyTorch. failed to create QNNPACK Add operator

In the output I notice an error:
Error in QNNPACK: failed to create add operator with 8.319039e-06 A-to-output scale ratio: scale ratio must be in [2**-14, 2**8) range
I can see that error in QNNPACK/src/add.c at 7d2a4e9931a82adc3814275b6219a03e24e36b4c · pytorch/QNNPACK · GitHub
Is this because I’m adding two things that are on very different scales? Is it only a warning, or does it make the quantization impossible?
It’s hard to work out exactly what’s going on and how I can work around it.
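For what it's worth, the check that fails can be mirrored in a few lines of plain Python (this is a sketch of the condition in QNNPACK/src/add.c, not QNNPACK's actual code): the ratio of the input A's scale to the output scale must lie in [2**-14, 2**8), and the 8.319039e-06 from the error message is well below the lower bound of roughly 6.1e-05.

```python
def scale_ratio_ok(a_scale: float, output_scale: float) -> bool:
    """Mirror of QNNPACK's add-operator validity check (see src/add.c):
    a_scale / output_scale must be in [2**-14, 2**8)."""
    ratio = a_scale / output_scale
    return 2.0 ** -14 <= ratio < 2.0 ** 8

# Lower bound of the allowed range:
print(2.0 ** -14)                        # ~6.1e-05

# The ratio reported in the error message fails the check:
print(scale_ratio_ok(8.319039e-06, 1.0))  # False
```

So it is not just a warning: QNNPACK refuses to create the quantized add operator, which is what surfaces as the INTERNAL ASSERT FAILED. One of the two tensors feeding the add must have an extremely small quantization scale relative to the output, e.g. because its observed value range is nearly constant.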

Maybe you can try setting a minimum value for the scale; PyTorch has this in: pytorch/torch/ao/quantization/observer.py at main · pytorch/pytorch · GitHub

I’m not sure if this is relevant, but I’ve never seen this error before.