Manual quantization vs prequantized model

Hi!

I’m working on an academic quantization project on MobileNetV3. I have already trained a MobileNetV3-Large on my own dataset, and now I want to run some tests with dynamic quantization and quantization-aware training (QAT) to compare them against my baseline.
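For reference, this is roughly what I have in mind for the dynamic-quantization test (just a sketch; the checkpoint path and class count are placeholders for my own setup):

```python
import torch
import torchvision

NUM_CLASSES = 10  # placeholder for my dataset's class count

# Load my fine-tuned float MobileNetV3-Large (path is a placeholder)
model = torchvision.models.mobilenet_v3_large(num_classes=NUM_CLASSES)
model.load_state_dict(torch.load("mobilenetv3_large_finetuned.pth"))
model.eval()

# Dynamic quantization only swaps supported layer types (mainly nn.Linear),
# so the convolutions stay in float and the gains on MobileNetV3 are limited
quantized_model = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```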

I have searched extensively and found several posts that either make manual modifications to the architecture or redefine the entire model from scratch (most of them for MobileNetV2).

My question is: does it make sense to fine-tune the quantized MobileNetV3 model on my own dataset?

Thank you in advance!

Victor

It depends on your goal. If you want something very small, fast, and accurate, like an on-device speech-recognition app on a phone, then probably yes.

If you just want to run some tests, then it doesn’t seem worth it, since the point would be to see how these APIs behave on a given model, right?
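If that is the case, torchvision already ships a quantizable MobileNetV3, so you can exercise the QAT API without rewriting the architecture yourself. A rough sketch, assuming a recent torch/torchvision (the checkpoint path, class count, and training loop are placeholders):

```python
import torch
from torchvision.models.quantization import mobilenet_v3_large

NUM_CLASSES = 10  # placeholder

# Quantizable MobileNetV3 with QuantStub/DeQuantStub already in place;
# quantize=False keeps it in float so it can be fine-tuned with fake quantization
model = mobilenet_v3_large(quantize=False, num_classes=NUM_CLASSES)
model.load_state_dict(torch.load("mobilenetv3_large_finetuned.pth"))  # placeholder path

model.train()
model.fuse_model(is_qat=True)
# "qnnpack" targets ARM; use "fbgemm" (or "x86") for server CPUs
model.qconfig = torch.ao.quantization.get_default_qat_qconfig("qnnpack")
torch.ao.quantization.prepare_qat(model, inplace=True)

# ... fine-tune for a few epochs with your existing training loop ...
# train_one_epoch(model, optimizer, train_loader)  # placeholder

# Convert to a real int8 model for evaluation
model.eval()
quantized_model = torch.ao.quantization.convert(model)
```

After `convert()`, you can benchmark accuracy and model size against your float baseline the same way you already do.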