Quantized CNN model gives the wrong output on Android

I trained a CNN image-deblurring model and exported a .ptl model following the tutorial. It worked well in my demo app (it produces a sharp output from a blurred input).

After that I retrained the model with quantization-aware training (QAT) using the qnnpack backend. The output is still good when I reload the quantized .ptl model with torch.jit.load in Python. However, the output becomes bad when I run the model in my app: there are no runtime errors, but the result is different. I did not change any Java code in the app compared with the pre-quantization model.
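The Python-side check I do is roughly this (a minimal sketch: a tiny stand-in module replaces my real trained model, and a random tensor replaces a real blurred image):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Tiny stand-in for the trained deblurring CNN, just to show the check.
model = torch.nn.Conv2d(3, 3, 3, padding=1).eval()
scripted = torch.jit.script(model)
optimize_for_mobile(scripted)._save_for_lite_interpreter("check.ptl")

# .ptl archives saved with _save_for_lite_interpreter also contain the
# full TorchScript model, so torch.jit.load can reload them on desktop.
reloaded = torch.jit.load("check.ptl")
x = torch.rand(1, 3, 64, 64)
with torch.no_grad():
    # Outputs from the reloaded model match the in-memory module.
    assert torch.allclose(model(x), reloaded(x), atol=1e-4)
```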

I optimized and exported the model like this:

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

script_module = torch.jit.script(model)
optimized_scripted_module = optimize_for_mobile(script_module)
optimized_scripted_module._save_for_lite_interpreter("mobile_classification.ptl")
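For context, the full QAT export path looks roughly like this (a minimal sketch with a tiny hypothetical model; in my real code a proper QAT fine-tuning loop runs between prepare_qat and convert):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Tiny hypothetical stand-in for the real deblurring CNN.
class TinyDeblur(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(3, 3, 3, padding=1)
        self.relu = torch.nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

# The quantized engine should match what the app's backend uses (qnnpack on ARM).
torch.backends.quantized.engine = 'qnnpack'

model = TinyDeblur()
model.qconfig = torch.quantization.get_default_qat_qconfig('qnnpack')
model.train()
torch.quantization.prepare_qat(model, inplace=True)

# A few forward passes stand in for the real QAT fine-tuning loop,
# so the fake-quant observers see some data before conversion.
for _ in range(3):
    model(torch.rand(1, 3, 32, 32))

model.eval()
torch.quantization.convert(model, inplace=True)  # fake-quant -> real int8 ops

script_module = torch.jit.script(model)
optimized_scripted_module = optimize_for_mobile(script_module)
optimized_scripted_module._save_for_lite_interpreter("mobile_classification.ptl")
```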

Libraries used for PyTorch Android:

implementation 'org.pytorch:pytorch_android_lite:1.9.0'
implementation 'org.pytorch:pytorch_android_torchvision_lite:1.9.0'