FengMu1995
I quantized both my model and my input to int8, but an error was raised. Why does the model require the input type to be float?
Could you provide more context, ideally a reproducible example?
FengMu1995
Hi, I found the error. After removing the "normalize" step, it worked.
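For anyone hitting the same error: a likely explanation (an assumption, since the thread doesn't show code) is that a normalization step silently changes the input's dtype, so the tensor handed to the quantized model no longer matches the dtype the pipeline expects. A minimal NumPy sketch of that dtype promotion:

```python
import numpy as np

# Hypothetical illustration: an int8/uint8 image tensor is promoted to
# float as soon as a normalize step (subtract mean, divide by std) runs.
img = np.random.randint(0, 256, size=(3, 224, 224), dtype=np.uint8)

mean, std = 127.5, 127.5
normalized = (img - mean) / std  # arithmetic with floats promotes to float64

print(img.dtype)         # uint8   -> matches an int8-style quantized input
print(normalized.dtype)  # float64 -> dtype no longer matches, hence the error
```

So whether you keep or drop the normalization, make sure the dtype of the tensor you actually feed the model matches what the quantized runtime expects; if normalization is required, it usually has to be folded into the quantization parameters rather than applied to the raw integer input.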